Welcome to our panel on what is serverless. I'm sure some of you have sat in on a couple of sessions today with deep dives into different ways to do serverless, different projects. This panel started out with us having a conversation, in a couple of different cities at the same time, over Slack, and we just said, hey, it's a buzzword, we should do a panel on it and really talk about what it is. So that's how it started. So I'm going to let everyone introduce themselves. Hey all, my name is Ruben, I work as a developer advocate for the office of the CTO at IBM Blue Box. Thank you for being here, and we really hope that you at least get something out of this. Hi everyone, I'm Kenny Bastani, I'm a Spring developer advocate at Pivotal. I'm Casey West, principal technologist for Cloud Foundry at Pivotal, and frankly, I hope I get something out of this. I'm Tyler Britton, developer advocate for IBM, for Blue Box as well. And I'm Kim Bannerman, I'm office of the CTO, on the technical advocate team at Blue Box IBM. Cool. So beyond the technology buzzword, folks, what is serverless? All right, I'll start. I have the mic, so I'm taking control. Trump style. Trump style is the interrupt. Yes. So to me, I think a lot of people have similar reactions to serverless. Like, well, there's definitely servers, so what's this serverless thing? And I think it's just the move up to that next level of abstraction. So a term I've heard recently that I like a little more is function as a service. So the idea of a subset of code that does a specific thing that runs when it's called; event-driven computing is a decent term too. I think that's fair. So who else is old and remembers CGIs? Yeah, so I feel like it's everything up to the request and event loop being handled by something else, which is kind of like how CGIs used to roll, except there are some enhancements for performance. But that's how I see it, too.
It's a function, and it's about what you can abstract and then allow a provider to give you that you can trust. It used to be that we couldn't do that with things like the actual server that manages the request handling and the IO loops, but now it seems we can. So I like to think that serverless is, again, functions. I agree with both of you, but I think, too, it kind of fills a gap, especially when it comes to integration, where you want to deploy just this one piece of code. In the past, when it comes to integration, you might have had a SQL Server Agent job. And it's just very easy to change, it's easy to deploy. I think we have other questions later where I'm going to talk about why you shouldn't actually use it for a full application, but that's pretty much it. Pretty much in agreement with them. To me, the main motivator is to debunk the buzzword. You see it all over social media. People are abusing it. Serverless here, serverless there. What does that even mean? So it's a question that I want you all to ask yourselves when you see something serverless: what exactly do you mean by that? Because there are servers back there, and there's a lot that goes into that. Speaking of which, Kenny, what are the good use cases? You want to go? Yeah? No. OK, I'll take that. All right. So as far as use cases, from what I've looked at, doing some research, because I haven't actually built a full application with it, and I don't know if many people have, it really fills, for me, a big use case. It fills this gap where you might want to have an API gateway on top of a legacy system. So you need a way to integrate with a legacy system. You want to do translation from SOAP XML to JSON. Maybe you have some other business logic in there. I think it should be lightweight. I think it should be dumb, and not try to do too many things with routing.
But a simple translation layer that you can put on top of your legacy system, and maybe then integrate with a microservice architecture. I suppose that dovetails with some of the things that I've seen: background jobs, data processing, anything with discrete inputs and outputs and relatively low side effects. This kind of comes back to functions as a service and functional programming models, where you can rely on a function to behave the same way, to have relatively low side effects. And then you can get a lot of benefit out of maybe fanning out a large number of instances of these things. The other thing that I see, sort of from an operational perspective, is the opportunity for more efficiency in resource utilization. If your process only runs when it's called, even if it's run millions of times a minute or something, then you're still able to reuse hardware and physical resources more efficiently than if you have long-running processes stuck in memory all the time, maybe being idle more often than they're not. Yeah, I think to me, if you're looking at a whole block of code for an application, any piece that you'd describe as "it waits for this to happen and then it does this thing" is a really good fit for this type of methodology, because it's not running while it's waiting; it gets woken up and run. And I think the other piece you hit on is really important: the scalability of it. Those types of jobs are generally single-threaded, doing one specific thing, say working on one file or one piece of data that's being crunched. So you can make them parallel really easily, and that's where the scalability comes in. So let's say your function is, whether it's an API call or it's processing an image file or something like that, you're basically spinning up an instance for each event.
So if you get 10 million API calls one day, you're spinning up 10 million instances to do that job and finish it, and the next day you may spin up 500. So it gives you that flexibility for something that's relatively stateless, just what that individual job has to do. And then it gives you that scalability versus a model, whether you're using physical servers or even a PaaS, where you have workers, if you will. So if it gets busy, I need to spin up more workers, or I spin down workers. In this case, it's happening at the individual job level, and it's much more granular. Serverless and what Node.js has been doing for, like, five years. Can you save that for the questions? What's the difference between what Node.js has been doing for however many years and serverless? Might as well, let's go for it. What? Sorry, no, I was gonna finish the thought. You guys? Yeah, no, go for it. So one actual application I saw a while back was with some former colleagues. With serverless, it's small functions that get called each time, and they had no problem with concurrency, no problem accessing the database, et cetera. So any use case where you don't have to worry so much about how it writes to or reads from the database, those lend themselves really well to these serverless architectures, so to speak. Sounds good. So Node has the event loop that's pretty well handled and abstracted, and I think this is actually just a model, again, and this is what I keep going back to, where the event loop is externalized and handled by a third-party provider that you can rely on. I'll use the term vendor very loosely to mean someone else who provides you a reliable event loop, so that you can execute small bits of code in memory efficiently and then give those resources back when it's done, to be used for other things. So I don't think there's a lot of difference.
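To make this concrete, here is a minimal sketch of the model being described: a small, stateless function that does one job per event (in this case, the gateway use case from earlier, SOAP-style XML in, JSON out), invoked by an event loop that, on a real platform, the provider owns. The handler signature is hypothetical, loosely modeled on AWS Lambda's convention.

```python
import json
import xml.etree.ElementTree as ET

def translate_handler(event, context=None):
    """Translate one XML request body into a JSON response.

    No state is kept between invocations, so the provider can run
    any number of copies of this handler in parallel, one per event.
    """
    root = ET.fromstring(event["body"])
    payload = {child.tag: child.text for child in root}  # flatten one level of XML
    return {"statusCode": 200, "body": json.dumps(payload)}

# A stand-in for the provider's event loop: in a real platform this loop
# lives on the vendor's side and spins handler instances up and down per event.
def fake_provider_loop(events):
    return [translate_handler(e) for e in events]
```

Ten million events means ten million independent invocations rather than one long-lived process, which is where the per-event scaling described above comes from.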
So what aren't good fits? What's a bad fit? So I think a bad fit would probably be building your entire application. Let's say you're primarily programming in Java and you wanna build your entire application architecture as serverless, with functions essentially, and we're already doing that. It comes down to the unit of deployment: how much business logic do you want to be able to deploy independently of one another? So I think a bad fit probably would be going serverless-first and building an entire serverless architecture, because we're already doing kind of that with microservices, but we're building around business capabilities with microservices. It's well defined. There are really not a lot of, I guess, success stories out there with serverless. I think it's early days still. As I mentioned previously, bad use cases would be code for events that require some sort of synchronicity or some sort of ordering. Again, we're gonna be firing off an independent execution for each of these events. So if you require a strict order, or anything that requires a little marshalling of data, or any sort of strict reads and writes, et cetera, I would say those are pretty bad fits for a serverless architecture. I would throw in there that, in addition to that, anything with side effects that you're relying on: if you can't handle eventual consistency in your architecture now, then don't bother with a serverless architecture, because it's gonna get worse, to your point. I think expectations of atomicity, is that the right word? Yeah, thank you. That one was hard, yeah. Are kind of thrown out the window in this model, to use it effectively. As soon as you start blocking, then you've kind of broken the model. Yeah. Yeah, anything where your code would call this function, say it was in a monolith, and if it just didn't return your app would have a bad time, is not a good fit.
As I said, you're handing this off: instead of being part of your code, it's this external service. And as highly available as any of the vendors or users try to build these architectures, sometimes functions just won't run for some reason. It's technology, right? It breaks. So if you don't have the logic to handle that and resubmit it, or however you're doing that, it's gonna be a really hard time. And then, we're talking at sort of a technical level, but I'm curious, I'd like to poll the audience. How many folks here are currently involved in some way with a microservices strategy? Who's into that? Okay, now keep your hand up if you think that your organization is able to effectively manage and deliver microservices right now. Okay. So now maybe you have, this could be anywhere from, say, two microservices to maybe hundreds, maybe you're at that level now. But a serverless architecture, to Kenny's point when he talks about the ability to deliver on smaller and smaller increments independently, takes it to sort of an extreme level. And honestly, I don't know the answer. I don't know anyone who's effectively cataloging and managing a collection of functions across a distributed, graph-like architecture right now. So I would say don't just jump into it without thinking ahead a little bit about how you're going to manage this once it proliferates. Yeah, one thing I would add to that: definitely think about your whole CI/CD pipeline. It matters with regular coding, and it gets even more important with serverless, because it's not one block of code where you can run all your tests against it; you've chunked it off into these little pieces. So that's when the individual unit tests and integration tests become even more important, and then how you handle the failures of those as well.
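Since an invocation can simply fail to run, the resubmission logic mentioned above has to live with the caller. A generic sketch of retry with exponential backoff (not any particular vendor's API; the names here are made up):

```python
import time

def invoke_with_retry(fn, event, attempts=3, base_delay=0.5):
    """Call fn(event), resubmitting on failure with exponential backoff.

    Re-raises the last error once attempts are exhausted. Because fn may
    end up running more than once, it should be idempotent.
    """
    for attempt in range(attempts):
        try:
            return fn(event)
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts; surface the failure to the caller
            time.sleep(base_delay * (2 ** attempt))  # back off before resubmitting
```

This is also where the unit-testing point bites: each small function, and wrappers like this one, get tested in isolation rather than as part of one big block of code.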
So what I've seen with one application that I was messing with, that I'm using serverless for: it started out in a microservices-type approach, and one of the microservices was basically workers that waited for an API call and did some stuff. So then I said, well, hey, this would be a perfect fit to pull these out. So there are pieces of the application that are still stateful, that are running all the time, connecting to databases, but those worker microservices are now replaced with serverless. But a big piece of that was moving to a full pipeline; I mean, the fact is that I set up my own Jenkins server just to make sure that if I made a change, it didn't break the whole thing, because of the independent nature of each individual function. So what is the impact serverless has on organizations? Is that what? Yeah. So kind of along with what Casey was saying, it's now fashionable for people like team managers, CTOs, et cetera: oh, let's go with serverless, let's try a POC or whatever. So it's easier said than done, right? Like, oh yeah, serverless is easy, you just do this, do that, you just have to rewrite your application, that's all. But beyond that, there's the operational perspective. Let's say that you have certain parts of your application that you have offloaded to event-driven or serverless architectures, and all of a sudden your ops person or site reliability engineer gets paged at 2 a.m. Are they trained to know how to debug a function? Or are they gonna require a developer to actually go and look into the logs and figure out what's going on? So there are certain aspects of serverless that need cultural transformation and organizational transformation. It goes beyond the technology itself; technologies come and go. What makes a deployment of any technology successful, in my opinion, is the correct organizational embrace that it gets.
So it's easy to say, oh, we have serverless, again for the buzzword, you get free marketing, blah, blah, blah, but when push comes to shove, is your organization actually ready to be serverless? So I think that the organization really has to look at these functions and determine who owns them, right? What we're learning today with microservices is that we have these teams who are organized around business capabilities, and they're responsible for building and operating an application. But with functions, over time, who inherits a function once it becomes legacy, right? What's the life cycle of that? Is it easy to decompose? What's the size of these things? I think that we are lacking a lot of clarity around what we should do with serverless. I think there are definitely a few use cases there that are very helpful, with filling in gaps especially, but I think there's a hybrid approach that is yet to emerge between microservices and serverless. So I like to try and be optimistic about what the impact will be, but there's also the pessimist in me, and I'll start with that: the pessimist says that nobody knows how to actually implement this effectively. We're here at the Cloud Foundry Summit, CF Summit, and we're talking about this where there are a couple of early attempts at serverless architecture built on the Cloud Foundry stack. I think they look fairly promising and they're interesting, but none of us have the technology in our production environments to make this happen right now at scale, unless we're running on Amazon and we have Lambda available and we can actually take advantage of that. So part of me says nobody's quite ready yet. Another pessimistic side of me says a huge part of this is about being able to have this code sort of pre-loaded, or ready to execute, without having to incur the startup cost every time you wanna run a function.
And if you don't have a model where it's already ready to go, to start executing on business processes without having to boot up a JVM or even Ruby or Node or anything else, then you're gonna be kind of screwed from a performance perspective. So we still have some work to do, I think, on the operational side to make environments capable of providing these to us. But from an optimistic perspective, once that work is done, and we'll just hand-wave and say that it's gonna be magically sorted out by the Cloud Foundry community, then I see a great opportunity for increased resource utilization efficiency, the ability to cut down on the amount of electricity we need to use in order to power our applications at larger and larger scales. I think if we combine serverless as an architectural paradigm with things that are happening with unikernels and the emerging technology there, then we have the opportunity to really cut down on the footprint for our applications and continue to scale up and up. And so as an optimist, I see that as a future that's probably, frankly, four to five years out from being mainstream, but it's on the horizon. Yeah, I would take a little more aggressive approach, from the standpoint that if we think about it, when EC2 was first announced, the idea was, well, it's nothing really new technology-wise, we can just rent servers by the hour instead of buying them for three years. And I think this is that natural evolution: now we're renting servers by the microsecond instead of by the minute or hour, day or year. So I think those efficiencies drive not just power and cooling, but also cost efficiencies. So I think that's gonna be the main push behind it, which may speed up the adoption. But I definitely think, from an operational perspective, that's where companies need to figure out that this is different than what they're used to.
So then, for example, to your point: let's say we have a function, we built it, it does this one thing really well. Well, if other people's code starts calling this function and then we wanna swap out this function, we don't realize what the upstream impact is. So I think that's where the tooling is kind of lacking, aside from a code pipeline perspective for serverless, but also dependency mapping and tracking of that. Because right now, if you give me a Java project or a Python project, pretty much all the code's there, except for any external libraries, and those are generally well documented in requirements.txt or whatever. So I know what's happening, and in any IDE I can see where that's happening. Once I pull that out and it's a totally separate thing, someone's random other app could be using that function and I don't even realize it. So I think that's where the operational aspects get tough: making sure those things are handled in a clean way. And I think that's the whole move, even to microservices and stuff. It's more about automation, orchestration, code pipelines, those types of things. I think that's a basic prerequisite. But in my estimation, I think the adoption and the use cases will pick up faster than we're expecting, just like with EC2: we were like, oh, this is this cloud thing, maybe Netflix can do some stuff with it, and then how quickly other companies got on board with it and found new, interesting use cases for it. Yeah, I think the cool kids are gonna pick it up fast, and the companies that have only existed for one or two years that don't actually have anything real to manage yet. But I do think that it takes a little longer to get into some of the organizations that we are in, for obvious reasons. But you know, the thing that you mentioned about the proliferation and the ownership, Kenny mentioned this too, I think that's important.
But if you've ever been in an architectural meeting with a legacy application that's sort of on the table, where we're trying to figure out what to do with it, and you've tried to propose something like, let's have background jobs when we've never had them before, rather than inline synchronous processing of things as simple as sending emails. If anyone's ever had that fight and seen people push back against that. And then to go from something like that to, well, let's not just do background jobs, maybe let's start to decompose this into a collection of services. Maybe we'll just have one or two or three to start. And you've seen the fight against that. Going to something like serverless, where you're like, let's just take this function out of this class, and let's forget for a moment that it's an object and we don't know how to do functions because everything is object-oriented, but let's just call it that and then shove it into Lambda, or shove it into a task-oriented scheduler. I think it's gonna take some time to get people on board with that, because having more things to manage doesn't make your system easier to manage. That's it, that's all the questions we have. So, over to the audience. Yeah, is anyone doing anything that they're calling serverless right now, trying to get something like this, like a function-oriented delivery, out into production? We've got someone over there. Can you talk about it at all? How's it going? Okay, who's the provider? Okay, are you using Lambda? Yes. Cool. We use a database as a service like that, DynamoDB. DynamoDB? Yeah. The typical thing that I like is just a Lambda that can call a Lambda that can call a Lambda. Right, so you like the Node.js model of AWS Lambda? Yeah, yeah, with DynamoDB, got it. Okay. Any questions from you?
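For context on the setup the audience member described (Lambda functions backed by DynamoDB), here is a rough sketch of that shape, with the data store injected so the function itself stays stateless. `table` stands in for a DynamoDB table client (the `put_item` call mirrors boto3's, but the event fields and function name here are made up):

```python
def save_record(event, table):
    """Persist one event as an item; the function keeps no state of its own."""
    item = {"id": event["id"], "payload": event.get("payload", {})}
    table.put_item(Item=item)  # hand the item to the injected store
    return item
```

Keeping the store behind an injected client is also what makes a function like this testable in isolation with a fake table.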
The unikernels that you mentioned, have you got any more details on instances of that being used, or is it just too early days? Yeah, I think it's early days. This is my own fascination, and I haven't tried it myself yet. So I see the small footprint of a unikernel, the ability to strip away anything that you would need to load into memory and also page through the CPU in order to get your application to run. I see that as a benefit of unikernels generally. And you can cut down on your footprint quite dramatically for many applications. So that's a benefit. And then that only helps you spin up processes even more quickly. And so I just see that, along with an evented model for IO or request-response, that could be a pretty interesting thing to throw together. But I don't think anyone's trying that specifically. I expect that in about five minutes, Edith Levine of EMC Code will come out with a demo and have something on GitHub that shows this. But yeah. I think you can't have a panel without at least one contentious issue. So my take on unikernels specifically is that they're a fantastic use case in the IoT world. The fact that today we have things like smoke detectors that have Wi-Fi and everything, and they're running a full Linux kernel, is bonkers. That attack surface, that ability to update firmware and all those things: it seems like a perfect fit. If you're making a million of these devices, why wouldn't you cut the kernel back to the minimum? So to me, that's a fantastic unikernel use case. Where I think it doesn't fit is more in, say, the Cloud Foundry-type model, or even the serverless-type model, where you're going up that next layer to the container anyway. So I don't need a whole... we're done with VMs, right? We don't need VMs anymore. Nobody uses VMs anymore, right? Everything's containers already, right?
No, but as it moves forward, unikernels fit with VMs, but as we move to, you know, say Diego under Cloud Foundry, or even Docker, Kubernetes or any of those other things, I want a lower-profile OS, like CoreOS, but I don't really need a unikernel per piece of function, because I'm just doing it in a container anyway. Yeah, except those containers get big, and their memory footprint with an OS in them gets big anyway, and if you want more efficient resource utilization, if you want to use another cool term and bin-pack everything as tightly as possible, I think there's still benefit there. I do agree with you that if you have to do virtualization anyway, then you've already reduced some of the benefits of the small unikernel footprint, but I see other benefits that I find interesting. Yeah, so this sounds like a question like, you know, how many lines of code is a microservice? And I don't have a good answer for it. 640K ought to be enough for anybody. Do you have an idea yourself? Because that's tough, you know. So, you know, Kenny and I were talking about the size of the JVM and memory, and the proliferation of application instances kind of being a problem occasionally, and I could see that playing a factor here, for instance. Yeah, so there's memory footprint, and I think also time to first business process execution, right? Maybe not time to first byte, because you're not always doing that, but time to getting to the business process from whatever startup is. Those are the two things that you're gonna need to try and profile and have an understanding about, but I'm not about to make a guess at what's perfect or right. The only thing I would add there: it's similar to when we see people do TCO models of public cloud. They're like, oh well, EC2 costs this, and, you know, software costs this. Let me do the math if I buy the servers, if I install an OS on them and I do stuff. Oh, I could do it cheaper.
It's like, well yeah, if you're assuming that you're getting full utilization. And what do we know: with most of our data centers, if they're half utilized, we're doing awesome. So that's why I think this fits into that too. Hey, we can build this big serverless farm. Well, it may make sense if we're getting a decent number of transactions, events triggering, but if things are just trickling through, you've built this big environment and then you don't have any transactions to run through it. So the whole "hey, I'm only using it for milliseconds" doesn't make any sense, because you already bought the servers. You know, that's a good point. Yeah, if you're on-prem and you already have that capacity, and you have to amortize it and use it anyway, yeah, then you're there. But, you know, I think there's the execution process, and then there's just keeping these idle, long-running, long-lived processes in memory, which is kind of a waste anyway. So even when we talk about utilization, if you have a bunch of idle microservices all just sitting around eating that memory and not doing anything, then what's the point in that too? That's really still idle capacity. Cool, there's one more question. [Audience question, partly inaudible, about memory utilization and whether Spring Batch along with Spring Cloud Data Flow plays toward a serverless model.] So, can you read the question? Oh, that was... so can you say it again? Just summarize. So, to summarize what I'm trying to say: we have jobs running 24/7, and we use Spring Batch along with Spring Cloud Data Flow, which seems about to come into play in a serverless model. So I'm gonna summarize that question as: what's Spring's role in serverless? And I would say, so, have you heard of Spring Cloud Data Flow? Yeah, so it's Spring Batch, and then Spring XD, and then Spring Cloud Data Flow.
So right now the description of Spring Cloud Data Flow really is that it's data processing for microservices, but it resembles a serverless deployment model quite a bit. They're recomposable, so you can create these Spring Cloud Stream modules, and they're very small, and then memory footprint's gonna play a role. So how do you kind of batch together these functions in a single JVM process? So, I mean, it's early days, but there are definitely plans to go that route. I would say, in the abstract too, when it comes to data processing and serverless, if you can work around a streaming model, you're gonna get more efficient resource utilization over time, with fewer bursts, than if you just go with batches every hour, minute, day, whatever. You can't always do that, but generally speaking, if you can go with a more streaming approach to your data processing pipelines, you're probably gonna get more value out of that model as well. We're out of time, so any more questions, we'll be around later. If you guys can, you can tweet. Oh, you have a hashtag. We'll be monitoring that, I'll be monitoring that hashtag. We'll be having beers, I'll be looking. So yeah, just tweet at us with that hashtag, and we're gonna keep the conversation going, guys. Cool, thanks y'all.