Interviewer: Welcome back. We're at Flink Forward, the user conference for the Flink community put on by data Artisans, the creators of Flink. We're on the ground at the Kabuki Hotel in Pacific Heights in San Francisco, and we have another special guest from BetterCloud, which is a management company: Sean Hester, director of engineering. Sean, why don't you tell us what brings you to Flink Forward and give us some context for that?

Sean: Sure, sure. A little over a year ago we started restructuring our application. We had a spike in our vision where we wanted to go a little bit bigger, and at that point we had done some things that were suboptimal, let's say, in our approach to generating operational intelligence. So we wanted to move to a streaming platform. We looked at a few different options, and after pretty much a bake-off, Flink came out on top for us. We've been using it ever since — it's been in production for us for about six months. We love it. We're big fans. We love the roadmap. So that's why we're here.

Interviewer: Okay, so let's unpack that a little more. In the bake-off — so your use case is management — what were the criteria that surfaced as the highest priority?

Sean: First, we knew we wanted to be working with the latest generation of streaming technology — something that had addressed all of the big problems from the Google MillWheel paper: things like managing backpressure, and how you manage checkpointing and restoring of state in a distributed streaming application. Things we had no interest in writing ourselves after digging into the problem a little bit. So we wanted a solution that would solve those problems for us and that seemed to have a really solid community behind it. And again, Flink came out on top.

Interviewer: Okay. So now, understanding why you chose Flink, help us understand BetterCloud's service. What do you offer customers?
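The checkpointing and state-restore problem Sean mentions is handled by Flink itself (via its distributed snapshot mechanism), but the idea can be sketched in a few lines. The following is a toy Python illustration — not Flink code and not BetterCloud's system — of what it means for a stateful operator to snapshot its state and recover from a failure by restoring the last checkpoint and replaying later events:

```python
# Toy sketch of checkpoint/restore for a stateful streaming operator.
# In Flink the framework does this automatically; this only shows the concept.

class CountingOperator:
    """A stateful operator that counts events and can snapshot/restore state."""

    def __init__(self):
        self.count = 0  # operator state

    def process(self, event):
        self.count += 1

    def snapshot(self):
        # In a real system the snapshot goes to durable storage (e.g. S3/HDFS).
        return {"count": self.count}

    def restore(self, checkpoint):
        self.count = checkpoint["count"]


op = CountingOperator()
checkpoint = None
events = ["e1", "e2", "e3", "e4", "e5"]
for i, event in enumerate(events):
    op.process(event)
    if i == 2:                      # take a checkpoint after the third event
        checkpoint = op.snapshot()

# Simulate a failure: a fresh operator restores the last checkpoint, then
# replays only the events that arrived after it — recovery without reprocessing
# the whole stream.
recovered = CountingOperator()
recovered.restore(checkpoint)
for event in events[3:]:
    recovered.process(event)

assert recovered.count == op.count == 5
```

The point of the sketch is the division of labor: the operator only defines what its state is; deciding when to snapshot, where to store it, and how to replay is exactly the machinery Sean says they had no interest in writing themselves.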
And how do you see that evolving over time?

Sean: Sure, sure. So you've been calling us a management company — we provide tooling for IT admins to manage their SaaS applications: things like the Google suite, or Zendesk, or Slack. We give them that single point of entry, the single pane of glass, to see everything — all their users in one place, which applications are provisioned to which users, et cetera. We literally go to the APIs of each of the partners we provide support for and gather data, and from there it starts flowing through the stream as a set of change events, basically: hey, this user has had a title update or a manager update. Is that meaningful for us in some way? Do we want to handle a particular workflow based on that event? Or is that something we need to take into account for a particular piece of operational intelligence?

Interviewer: Okay, so you dropped in something really concrete there: a change event for, let's say, the role of an employee. That's a very application-specific piece of telemetry coming out of an app — very different from saying, "what's my CPU utilization?", which will be the same across all platforms.

Sean: Correct.

Interviewer: So how do you account for, let's say, applications that might have employees in one SaaS app and also employees in a completely different SaaS app, where they emit telemetry or events that mean different things? How do you bridge that?

Sean: Exactly. So we have a set of teams dedicated to just the role of getting data from the SaaS applications and emitting it into the overall BetterCloud system. After that, there's another set of teams dedicated to providing that central canonical view of a user or a group or an asset — a document, et cetera. All of those disparate models that might come in from any given SaaS app get normalized by that team into what we call our canonical model, and that's what flows downstream to the teams I lead to have operational intelligence run on it.
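The "change event" idea Sean describes — polling a SaaS API, comparing the new view of a user against the previous one, and emitting an event per changed field — can be sketched simply. This is an invented illustration; the field names and the set of workflow-triggering fields are assumptions, not BetterCloud's actual schema:

```python
# Hypothetical sketch of change-event generation: diff the previous and
# current view of a user pulled from a SaaS API and emit one change event
# per modified field. Schema and field names are invented for illustration.

INTERESTING_FIELDS = {"title", "manager"}  # fields that should trigger a workflow

def diff_user(previous, current):
    """Emit a change event for every field whose value changed."""
    events = []
    for field in current:
        if previous.get(field) != current[field]:
            events.append({
                "field": field,
                "old": previous.get(field),
                "new": current[field],
                "triggers_workflow": field in INTERESTING_FIELDS,
            })
    return events

before = {"id": "u1", "title": "Engineer", "manager": "alice", "phone": "x100"}
after  = {"id": "u1", "title": "Senior Engineer", "manager": "alice", "phone": "x200"}

events = diff_user(before, after)
# Two events: a title change (workflow-triggering) and a phone change (not).
```

Downstream, the streaming job's role is exactly the question Sean poses: inspect each event and decide whether it should kick off a workflow or merely update some piece of operational intelligence.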
Interviewer: So just to be clear, for our mainstream viewers who aren't rocket scientists like you: what you're telling them is that they don't have to be locked into the management solution that comes from a cloud vendor, where the vendor harmonizes all the telemetry and management to work seamlessly across its own services and the third-party services on its platform. What you're saying is that you're putting that commonality across the apps you support on different clouds.

Sean: Yes, exactly. We provide the glue, or the homogenization, necessary to make that possible.

Interviewer: Now, this may sound arcane, but being able to put that commonality in place implies that there is overlap — complete overlap — in that information: in how to take into account and manage an employee onboarding over here and one over there. Unlike in hardware, where it's obviously the same no matter what you're doing, what happens in applications where you can't find a full overlap?

Sean: Well, it's never a full overlap, but there is typically a very core set of properties — for a user account, for example — that we can work with regardless of what SaaS application we might be integrating with. And we do have special metadata areas within our events that are dedicated to, let's say, the original data fresh from the SaaS application's API, and we can do one-off operations specifically on that SaaS app's data. But yeah, in general, there's just a lot of commonality in the way people model a user account or a distribution group or a document.

Interviewer: Okay, interesting. And so the role of streaming technology here is to get those events to you really quickly, and then for you to apply your rules to identify root cause or even to remediate — either by advising a person, an administrator, or automatically.

Sean: Yes, exactly.

Interviewer: And plans for adding machine learning to this going forward?
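Sean's answer describes a common pattern: normalize each app's record onto a small canonical core, and keep the original payload in a metadata area for app-specific, one-off operations. A minimal sketch, with entirely hypothetical field mappings (Google and Slack user schemas below are invented for illustration, not their real APIs):

```python
# Hedged sketch of canonical-model normalization: map each SaaS app's user
# record onto a shared core, keeping the raw payload for one-off operations.
# All schemas and mappings here are invented for illustration.

def to_canonical(source_app, payload, field_map):
    """Map an app-specific user payload onto the canonical model."""
    canonical = {dst: payload.get(src) for src, dst in field_map.items()}
    canonical["source_app"] = source_app
    canonical["raw"] = payload   # original data, fresh from the SaaS API
    return canonical

# Two apps that model "a user" differently (hypothetical shapes):
google_user = {"primaryEmail": "ann@example.com", "name": "Ann", "orgUnit": "/eng"}
slack_user  = {"profile_email": "ann@example.com", "real_name": "Ann", "tz": "EST"}

g = to_canonical("google", google_user,
                 {"primaryEmail": "email", "name": "display_name"})
s = to_canonical("slack", slack_user,
                 {"profile_email": "email", "real_name": "display_name"})

# The common core lines up across apps, while app-specific detail survives
# in the `raw` metadata area.
assert g["email"] == s["email"]
assert g["raw"]["orgUnit"] == "/eng"
```

The design choice mirrors what Sean says: downstream operational intelligence runs against the stable canonical fields, while anything that doesn't overlap stays available in `raw` rather than being forced into the shared model or dropped.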
Sean: Absolutely, yeah. One of the big asks as we started casting this vision in front of some of our core customers was basically: "I don't know what normal is. You figure out what normal is, and then let me know when something abnormal happens." That's a perfect use case for machine learning, so we definitely want to get there.

Interviewer: Running steady state — learning the steady state and then finding anomalies.

Sean: Exactly, exactly.

Interviewer: Interesting, okay.

Sean: Not there yet, but it's definitely on our roadmap.

Interviewer: And then what about management companies that might say: we're just going to target workloads of this variety — say, a big data workload where we take Kafka, Spark, Hive, and maybe something that predicts and serves — and we're just going to manage that. What trade-offs do they get to make that are different from the ones you get to make?

Sean: I'm not sure I quite understand the question you're getting at.

Interviewer: It's whether they can narrow the scope of the processes or workloads they're going to model — say, just big data workloads, with some batch and interactive pieces — and only cover a certain number of products, because those are the only ones that fit into that type of workload.

Sean: Gotcha, gotcha. Yeah, so we've designed our roadmap from the get-go knowing that one of our competitive advantages is going to be how quickly we can support additional SaaS applications. We actually baked into most of our architecture an approach that's very configuration-driven, let's say, versus hard-coded, which allows us to onboard new SaaS apps very quickly. I think the value of being able to manage, provision, and run workflows against the 20 different SaaS apps that an admin in a modern workplace might be working with is just so great that it's going to win the day eventually.

Interviewer: A single pane of glass — not at the infrastructure level, but at the application level.

Sean: Exactly.
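The "figure out what normal is, then tell me when something abnormal happens" ask has a classic minimal form: build a statistical baseline from steady-state observations and flag values far outside it. This toy z-score check is an illustration of the concept only — BetterCloud says above they hadn't built this yet, and real anomaly detection would be far more sophisticated:

```python
# Minimal "learn normal, flag abnormal" illustration: a z-score check
# against a baseline of steady-state counts (e.g. failed logins per hour).
# A toy example of the concept, not BetterCloud's system.

import statistics

def is_anomalous(history, value, threshold=3.0):
    """Flag `value` if it lies more than `threshold` standard deviations
    from the mean of the observed history."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(value - mean) > threshold * stdev

# "Normal" hourly counts observed during steady state (mean 5, low variance):
baseline = [4, 5, 6, 5, 4, 6, 5, 5, 4, 6]

assert not is_anomalous(baseline, 7)   # close to normal: no alert
assert is_anomalous(baseline, 50)      # clearly abnormal: alert
```

In a streaming setting the baseline would be maintained incrementally per user or per tenant as events flow through, rather than recomputed from a fixed list as here.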
Interviewer: All right, we've been with Sean Hester of BetterCloud, here at the Flink Forward event sponsored by data Artisans for the Flink user community — the first-ever conference in the U.S. for the Flink community — and we'll be back shortly.