Live from San Francisco, it's theCUBE, covering Google Cloud Next '19, brought to you by Google Cloud and its ecosystem partners. Hello everyone, welcome back to theCUBE's live coverage here in San Francisco at the Moscone Center. This is theCUBE's live coverage of Google Next '19, Google's cloud computing conference. I'm John Furrier with my co-host Dave Vellante. Stu Miniman is here as well; he'll be coming on, doing interviews. Our next guests are the founder and CEO of SignalFx, Karthik Rau, and Leonid Igolnik, EVP of Engineering. SignalFx has been a great company; we've been following it for many, many years, a pioneer in a lot of the monitoring and serviceability of applications. Now it's prime time: the world has spun to their doorstep. Karthik, congratulations on your success. It's prime time for your business. Yeah, thank you, John. Welcome back. Great to be on again. I'm glad that you're on because, you know, we talked six years ago about some of the trends. We saw it early. We saw, you know, the containers, the Docker movement, and obviously Kubernetes got massive growth. But you started out with visibility into what these services were going to look like. Cloud web services is kind of the next level, and it's here right now. Yeah, absolutely. There are two things that we predicted would happen. One was that architectures would get a lot more distributed and elastic, and that would require a lower-latency monitoring system that could do real-time analytics. That was one of the key changes. And the other thing we predicted was that developers would get more involved in operations, which is the whole DevOps movement. And now both of those are very much in the mainstream. So we're really excited to see these trends. And looking at the Google keynotes today, obviously we're starting to see the realization of true infrastructure as code.
You're starting to see the beginning signals of: look, we can actually program the infrastructure and not even have to deal with it. This is key. And you guys have some hardcore news, so let's get that out of the way. You guys have some updates. Let's get into the news, and then we can get into the conversation around what you guys are doing in the industry. So today we're bringing three things to the conference, and to both customers and prospects, starting with announcing our support for Cloud Functions. Cloud Functions are a great technology that we see being adopted in retail, for spiky workloads, things where you have a flash sale and you need to understand what's happening, maybe lasting minutes, which is where our platform shows off at its best: the one-second-resolution data. Some of the flash sales we see from existing customers don't even last a minute. So being able to look at this at sub-minute resolution, and to react in machine time rather than human time, is something our customers now expect. The second thing we're focusing on is Istio, and Istio on GKE specifically. We see service mesh adoption continuing to grow, both in new, modern applications and in taking legacy workloads and unlocking the potential of moving those legacy workloads to the cloud. And with Istio, and specifically our microservices APM, which isn't just applicable to microservices, we see a lot of our customers realizing a lot of value from the tracing abilities that a service mesh like Istio provides: an ability to understand your topology and service interactions for free, out of the box, whether it's on-premises with Istio or in the Google environment. And then lastly, as we see customers and prospects adopt Kubernetes, we're also starting to see the next layer above Kubernetes coming in with Knative, which is getting support out of the box, whether it's the dashboards, the tracing, or the metrics. And that's the third announcement we have today.
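Leonid's point about one-second versus one-minute resolution can be made concrete: aggregate the same event stream at both resolutions, and a sub-minute flash-sale spike simply disappears at the coarser one. A minimal sketch (a hypothetical helper for illustration, not the SignalFx API):

```python
def bucket_counts(events, resolution_s):
    """Aggregate event timestamps (in seconds) into fixed-width buckets.

    At one-minute resolution, a 30-second traffic spike collapses into a
    single data point; at one-second resolution the spike's shape is visible.
    """
    buckets = {}
    for t in events:
        key = int(t // resolution_s) * resolution_s  # start of the bucket
        buckets[key] = buckets.get(key, 0) + 1
    return buckets
```

For example, a 30-second burst of one event per second between t=10 and t=39 produces a single bucket at one-minute resolution, but thirty distinct points at one-second resolution, which is the difference between seeing an averaged blip and seeing the sale happen.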
We're fully integrated with Google's offerings, and we're able to monitor and provide you with actionable content with just the flick of a switch. So, support for Knative out of the box. Out of the box. Full SignalFx support for Knative on Google Cloud. That is correct. So those are the three things. Karthik, I wonder if you could give us some insight as to what's going on in the marketplace. Multi-cloud is obviously a tailwind for you, but multi-cloud to date really hasn't been a strategy; it's been more an outcome of multi-vendor. So is multi-cloud increasingly becoming a strategy for your customers, and what specific role are you playing to facilitate that? Yeah, absolutely. I think most of the larger enterprise accounts in particular tend to have a multi-vendor strategy for almost every category, right? Including cloud, which typically is one of their largest spends. And what we typically see is people looking at certain classes of workloads running on a particular cloud. So it may be transactional systems running on AWS. A lot of their more traditional enterprise workloads that were running on Windows servers potentially running on Azure. We see a lot of interest in data-intensive sorts of analytics workloads potentially running on GCP. And so I think larger companies tend to look at it in terms of: what's the best platform for the use case that they have in mind? But in general, they are looking at multiple cloud vendors. So there were some customers on stage today talking about their strategy. I think Thomas asked one retail customer: how did you decide what to put where? And essentially he said, well, it's either going to go into the cloud, lift and shift; we're going to refactor it, essentially reprogram it; or we're going to sunset it. What he didn't say is: we're going to leave some stuff on-prem, which somewhat surprised me, because of course, especially in financial services, you're going to get a lot of stuff left on-prem.
So what's your play with regard to those various strategies? And for the legacy stuff, I know cloud native is your claim to fame, but can you help those legacy customers as well? Talk about that. So I think what we've seen is that it's a given now that organizations are going to move to the cloud. It's a question of when, not if. And the cloud form factors are fundamentally different. They're software-defined, right? In a traditional data center, you're monitoring network equipment and storage devices; you're monitoring disks and fan failures on individual servers. When you're running in a cloud, it's a software-defined infrastructure, and it's far more elastic. And so even if you're just lifting and shifting, how you think about monitoring and observing this new cloud infrastructure is fundamentally different. So we're there for the very first step of the journey, helping an organization get the visibility it needs into the new architecture. And many times we're also helping them understand the before and after: how do I compare my performance in my on-premises data center to what it looks like in the cloud? That's step one. Step two is they start chipping away at those monoliths, or they have net-new initiatives, digital initiatives, running on Kubernetes, container-based architectures, microservices-based architectures. And that is a fundamentally different world. How you observe, monitor, and deploy, not just monitor, the entire supply chain of how you manage these systems is different. So there they have to look at different solutions, and we're obviously one of the key players helping them there. Leonid, we've been doing theCUBE now for a decade, and I think it was a decade ago, John, that we made the statement that sampling is dead. So I love your approach: you're not just taking small samples to do your performance monitoring. What's the architecture that enables you to do that? Can you talk about that a little bit?
So I think the most interesting thing with more modern architectures, especially with microservices adoption, is the complexity of how a transaction flows through the system. Basically, tossing a coin like we used to do in previous generations to capture some traces and get the data you need doesn't work anymore, because it's very tough to predict at the beginning of a trace where the transaction is going to go. We take a completely different approach to the market. We look at every single transaction, at scale. We have prospects talking to us about volumes of a giga-span a minute, so one billion spans observed per minute, and with some of the interesting tech we've built, we're able to pick out the interesting things. And the interesting things fall into a couple of categories: transactions that occur infrequently, and transactions that are above, say, P90, the slow ones. Because when you think about performance and understanding how the application performs, you really want to know what's slow, not what's normal, but you also have to capture enough of what's normal. So with some of our tech, we're able to keep only about 1% of transactions, but the right ones. And that's the biggest differentiator in what we put together for the APM product. One of the things I want to talk about with you guys is how you relate to some of Google's announcements. The key things, and I'm oversimplifying now, but they've got a serverless kind of announcement, they've got Cloud Run, the regions, which are global, and then obviously the open source commitment. You mentioned Functions, you mentioned Knative, obviously open source. You're seeing open source become much more of a production IT capability, and you guys obviously hit that with your solution. So the question I have for you guys is: how hard is it for you to provide that real-time monitoring? Because Google needs to build an ecosystem; that's what they're not talking about.
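The tail-based sampling Leonid describes, deciding after a trace completes whether to keep it rather than tossing a coin at its start, can be sketched roughly like this (a minimal illustration with hypothetical thresholds and names, not SignalFx's actual algorithm):

```python
import random

def keep_trace(duration_ms, p90_ms, endpoint, seen_endpoints,
               normal_keep_rate=0.01):
    """Decide, after a trace has completed, whether to retain it.

    Tail-based sampling inspects the finished trace, so slow and rare
    transactions can be kept deliberately instead of hoping a head-based
    coin toss happened to catch them.
    """
    if endpoint not in seen_endpoints:   # a transaction type we rarely see
        seen_endpoints.add(endpoint)
        return True
    if duration_ms > p90_ms:             # slower than the 90th percentile
        return True
    # keep a small random sample of "normal" traces to preserve a baseline
    return random.random() < normal_keep_rate
```

With a P90 of 200 ms, a 450 ms trace is always retained, while typical traces are kept at roughly the 1% rate mentioned above, which is how a system can store only a sliver of the volume yet still hold on to "the right ones."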
They didn't really talk about their ecosystem on stage, so you guys are a natural fit into service mesh, which they showed on stage. Jennifer Lin showed a great demo. So Google has to build an ecosystem. You guys are clearly positioned, through your announcements, as deeply integrated with Google. Cisco announced an integration. Obviously they have an integration. So integration seems to be the secret sauce with cloud, to play in this ecosystem. Could you guys elaborate on that dynamic? Because it kind of changes the old formula for ecosystems. It's very different, right? In the old days, you had proprietary systems. So the only way you could actually build an integration was to get your product managers in a conference room with the vendor and get visibility into their roadmap, access to everything, and that's why it just took a lot longer to get things done. I think what you're seeing with Google is that they've taken a very standards-based approach to everything, right? Whatever technologies they're releasing, they're trying to build as a standard. You can run it on any cloud. Instrumentation is a core part of their philosophy for any technologies they're releasing, such that when you have a new platform, it has a metrics library, or there are standards-based mechanisms to collect metrics, traces, and events. What that does is make it easy for the ecosystem to just pick it up, right? Our belief has been that in the old days, monitoring was all about proprietary instrumentation and collection. Today it's all about analysis. So the fact that all of this is openly available through open source or standards-based mechanisms is great for us, it's great for the customer, it's great for the ecosystem. And that's their one-to-many way of building integrations. And that's why you guys are supporting Knative, as an example. That's really kind of supporting the open source ecosystem.
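The standards-based collection Karthik describes is why open text formats for metrics (Prometheus's exposition format being the best-known example) caught on: any platform can emit metrics as plain text, and any backend can scrape them without a proprietary agent. A hand-rolled sketch of the idea, with hypothetical names rather than a real client library:

```python
class MetricsRegistry:
    """Tiny registry that renders counters in a Prometheus-style text
    exposition format, so any standards-aware collector can scrape it."""

    def __init__(self):
        self._counters = {}

    def inc(self, name, labels=None, amount=1):
        # Key on the metric name plus a canonical ordering of its labels.
        key = (name, tuple(sorted((labels or {}).items())))
        self._counters[key] = self._counters.get(key, 0) + amount

    def render(self):
        # One "name{label="value"} count" line per labeled counter.
        lines = []
        for (name, labels), value in sorted(self._counters.items()):
            label_str = ",".join(f'{k}="{v}"' for k, v in labels)
            suffix = f"{{{label_str}}}" if label_str else ""
            lines.append(f"{name}{suffix} {value}")
        return "\n".join(lines)
```

Because the output is a documented plain-text convention rather than a vendor wire protocol, the "one-to-many" integration Karthik mentions falls out for free: every monitoring vendor reads the same format.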
Yeah, I mean, our customers are running every single configuration and type of technology you can imagine, so it's our philosophy to just be everywhere they are and to support all the tech they might be running. But in general, we're big supporters of open source, and of the fact that developers are now running most software. That's the world of web services and SaaS. Developers have a preference for understanding the stacks they're running on and being able to control them, and that is obviously why open source has taken off the way it has. I think the other dynamic of embracing open source and standards is that it allows us as a vendor to focus not on the meetings with product managers and getting insight into the roadmap, but on getting the standards-based integrations deeply configured, with, for example, the content we provide out of the box for Istio on Google versus Istio on-premises versus Istio anywhere else. And that's where the differentiation and the value for the customer is: not in getting together on the roadmap and figuring out what to build next. You guys can move faster and take advantage of the lift that they get. I'd love it if you guys could each take a minute to explain the SignalFx value proposition, because I think you're perfectly positioned now as this becomes infrastructure as code with cloud. When should a customer call you guys? When are you needed? When do you get called, and where are you winning? Take a minute to explain when and where you fit into the customer environment. I would say as soon as a customer starts to leverage a cloud infrastructure, whether that's private cloud, you know, OpenShift, OpenStack, Pivotal Cloud Foundry, or a public cloud, how you monitor your infrastructure will be fundamentally different, and we can help you with that.
And then along your journey, once you've moved to the cloud and you start thinking about how to build modern application architectures, modern web services, DevOps, then we are necessary. You cannot get to the cloud-native stage, where you're releasing software every week, unless you have a monitoring system like SignalFx. Great, great. I also want to pick your brain on a dynamic I saw in the keynote. It might not be obvious to the mainstream folks, but Jennifer Lin gave a demo of taking a workload and porting it over with a small script, no code modifications, running it in a container. Cloud vMotion, we call it, basically. Anthos Migrate was the product: basically migrating a workload into containers on the Kubernetes engine automatically, with no rewrites. She said: what you want, where you want it. So I can see what she did there, and that's really cool, and that's a game changer; that's infrastructure as code. But then she moved to a conversation around service meshes. Because once you get these things containerized inside the Kubernetes engine, you're enabled for service meshes. This is like the holy grail of microservices. This is a big growth area. Can you guys explain what this means? What does service mesh mean? Because once these workloads start to be containerized, you're going to see much more migration to this new model. Where does service mesh kick in? Why is it important? And what should people pay attention to? Well, I would say one of the fundamental challenges of microservices is what people are increasingly calling observability, right? Because you have so many systems. With a single application, a single transaction, what is an application anymore? A single transaction can flow through dozens, hundreds of individual microservices. And you're changing your applications all the time.
So figuring out very quickly when you've introduced a problem is a big challenge. And so one of the big benefits a service mesh brings is that it provides automatic instrumentation of your applications and requests, in a way that makes it very easy, out of the box, to get visibility across your entire environment. So that is step one: getting that visibility. The next step is that you obviously need to analyze this corpus of data, and it's massive. That's where a solution like SignalFx comes in. We can collect all this data and help you really separate signal from noise. Then the last step is: how do you take action on that data? How do you automate responses, whether it's rolling back a canary release, or shifting a load-balancing strategy so that if there's a bad node, you stop sending traffic to it? All of that can be automated. And so what the service mesh is doing is providing the substrate to allow you to achieve that closed-loop automation, that infrastructure as code. That's the movement everyone is really focusing on right now, and it's a key technology to enable it. Talk about the observability trends, because this has been a hot venture-funded area. We hear trace, dynamic tracing. These are techniques. There's a variety of different mechanisms for observability. How do Kubernetes, and now service meshes, impact observability? Where is the puck going to be? If you're going to skate to where the puck is going, what's the state of the situation? Well, I think what it does is make instrumentation a lot easier. Typically the challenge when you're running an old Java application from 10 years ago is getting visibility into the app; it's a monolith. You have to get full visibility and the full call stack, and that's harder to collect. When you're in a microservices world with a service mesh, you're getting that visibility automatically. And what becomes more important is understanding the east-west latencies across all these different microservices.
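The closed-loop automation Karthik mentions, rolling back a canary or steering traffic away from a bad node, ultimately reduces to comparing the canary's observed error rate against the baseline's and acting on the result. A minimal sketch of that decision step (hypothetical thresholds, not a real mesh API):

```python
def canary_decision(baseline_errors, baseline_total,
                    canary_errors, canary_total,
                    max_ratio=2.0, min_requests=100):
    """Return 'rollback', 'promote', or 'wait' for a canary release.

    Compares error rates between the stable baseline and the canary;
    a service mesh supplies these request/error counts per service
    automatically, which is what makes automating this step practical.
    """
    if canary_total < min_requests:
        return "wait"                      # not enough canary traffic yet
    baseline_rate = baseline_errors / max(baseline_total, 1)
    canary_rate = canary_errors / max(canary_total, 1)
    # Floor the baseline rate so a perfectly clean baseline doesn't make
    # any single canary error trigger a rollback.
    if canary_rate > max_ratio * max(baseline_rate, 0.001):
        return "rollback"                  # canary is clearly worse
    return "promote"
```

A controller would run this on a timer against mesh telemetry and then call the platform's rollout API on a "rollback" verdict, which is the "substrate for closed-loop automation" point in the passage above.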
So because instrumentation is so much easier with all of these new technologies, what it means for monitoring is that the focus really shifts to who can make the most sense of this data, who can provide assistance to the operators to help them pinpoint, when there is a problem, what the potential cause is, and to triage it very quickly. So again, the whole value proposition has shifted to the analysis. So, Leonid, given that, what are your engineering priorities? You know, maybe share a little roadmap. Sure. So if you think about what we just talked about, the adoption of Kubernetes and service meshes, the challenges those environments bring are both the ephemerality of the environments on which you now deploy, compared to what most operators and application developers are used to, and the constant motion within the system. Kubernetes can move a workload several times an hour, and the amount of data those systems tend to generate becomes very difficult to cope with, not just for a monitoring system, but for a human. So how can you take what Karthik talked about, all this noise, and turn it into actionable intelligence across possibly tens of millions of time series an hour? And in the middle of the night, how do you get the operator to the root cause very quickly, and what kinds of technologies do we need to have as a vendor? That's where we spend a lot of time thinking: how do we provide actionable insight for those highly ephemeral environments that are getting even more ephemeral? One of the themes that's already popping out of Google Next, and we've seen it at the other cloud shows we've gone to, is that complexity is increasing. And the business model that seems to work well is taking complexity and making it simple.
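Getting an operator to the root cause across tens of millions of time series starts with automatically flagging which series are behaving abnormally at all. One textbook way to do that per stream (a simple sketch for illustration, not SignalFx's analytics engine) is a rolling z-score:

```python
import math
from collections import deque

class RollingAnomalyDetector:
    """Flag points that deviate from the recent rolling mean by more
    than `threshold` standard deviations."""

    def __init__(self, window=60, threshold=3.0):
        self.window = deque(maxlen=window)   # recent values only
        self.threshold = threshold

    def observe(self, value):
        anomalous = False
        if len(self.window) >= 10:           # need a minimal baseline first
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = math.sqrt(var)
            if std > 0 and abs(value - mean) / std > self.threshold:
                anomalous = True
        self.window.append(value)
        return anomalous
```

Running one small detector like this per time series turns millions of raw streams into a short list of "something changed here" signals, which is the starting point for the kind of middle-of-the-night triage Leonid describes.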
Whether it's abstraction layers or other techniques, how does a customer who's got all these new suppliers, new dynamics, a new shift in the marketplace, new business models, how does a customer deploy IT, deploy cloud, and move from complexity to a simplicity model? This is a hard challenge. I think that's one of the fundamental mental-model shifts an organization needs to make. Complexity was your enemy in the old days, right? Because you were releasing software once or twice a year, and so you didn't want it to be complex. But if your goal is speed and innovation, you're going to have to accept some complexity to get that speed and innovation. You just have to decide where that complexity is acceptable, and how you change your processes and your tooling to minimize its impact. So I would disagree with that sentiment, because I think organizations have to start thinking about things differently if they really want to move fast. So embrace complexity. You have to embrace complexity, and you have to think about the mitigating steps you need to take in your organizational structure, your processes, and your tooling to compensate for the additional complexity you're creating, while still releasing software as quickly as you need to. I would add, I think in a lot of ways you're shifting the complexity from infrastructure management up the stack, and in many ways IT is getting more and more complex, to your point. All of these abstractions perhaps make the underlying infrastructure less complex to manage, but you're absolutely right, Dave: the applications will become more complex. When you move to microservices and you've got 50 two-pizza teams working on a bunch of microservices, there's an organizational dynamic as much as there is a tech dynamic, right? How do you get these 50 teams to communicate with one another if there's an issue, an incident?
And the data pathways, the data pipelines, the journey of that data, are much, much more complex. Final question: the developers and operators coming together, that seems to be a big trend. Developers want a frictionless environment, a programmable internet. They're going to be spinning up these services, and then the operators have to run them. Those are two worlds coming together. What are your thoughts on the operations side and developers coming together? I think they're two peas in a pod. I mean, they are two parts, two necessary parts. I think you will see more and more automation move up the stack. The place to start is really at the infrastructure layer, and it will make the lives of operators of these cloud environments simpler. And then I think that automation will move up the stack as well over time. What's the most important story coming out of Google Next? If you can just kind of read the tea leaves, get a sense of what's going on here. 2019, whole new year, whole new game. What are your thoughts on what's going on in the cloud business? What's going on at Google Next? What's the big story? Well, I think from my perspective, it's very clear that they're focused a lot on multi-cloud and being cloud-agnostic: write once, run anywhere, and run it on Google. That seems to be a big push. And then the other is that they're just behind on go-to-market, and they seem to be focusing quite a bit on investing in all of the other, non-technology elements to make organizations successful. Leonid, on the tech side, what are you seeing as the big story here? I think Google was always strong on the tech, and they're continuing to deepen it. I think the more interesting story for me is about the go-to-market and embracing the complexity of the enterprise, and recognizing that not every application that will come to Google Cloud will be architected in a modern way.
There are thousands upon thousands of applications that still have to lift and shift, and some of the announcements around the service mesh are great enablers for those customers to start embracing cloud technology. Tech geeks love service mesh. I'm a big fan. Guys, thanks for sharing the insight. Give a quick plug for SignalFx: what's going on with the company? What are you guys looking to do? You're hiring, you're expanding, what's going on? We're in rapid growth as a company, really excited about the microservices APM product we introduced last year. What that does is bring distributed-trace analytics to our core monitoring platform. So what that allows you to do is get bottom-up visibility into each individual component through our metrics system, but then also a transaction-oriented view through our microservices APM product, bringing the two together. Super excited about the level of sophistication and analytics it's going to bring our customers. Where's the headcount now? We're about 250 people right now. 250, okay, and you've raised over nine figures, I think. Over $100 million, yes. So, Karthik, as a founder, what's it like to have had the vision early, and to have stayed the course? You stayed on the right wave, and now the wave's gotten bigger. What's it like to be the founder and be where you are now? It's terrifying at first, because you don't know if the markets are going to move in the direction you need them to, but it's very gratifying when that actually happens. And we're very fortunate that the world is moving very squarely into cloud-based architectures, and not just cloud, but all of these modern runtimes, which are exactly what we predicted the world would look like. And you had a great team; the engineering team was solid; you've got great chops. Any advice for entrepreneurs out there who are now getting into this world? Maybe younger entrepreneurs coming out and building some applications.
What's your advice to other founders out there? I could spend hours on that topic. I think you just have to continue to have faith and conviction in your beliefs and stick it out, because there are lots of twists and turns, especially in the early days. If you're betting ahead of the curve, you need to be patient and continue to believe in yourself and your ideas. Well, congratulations; the world has spun right to your doorstep. Congratulations to SignalFx. Thanks for coming on theCUBE. We're here in San Francisco for theCUBE's coverage, day one of three days. I'm John Furrier with Dave Vellante. Stay with us for more live coverage after this short break.