Hello and welcome. My name is Shannon Kemp and I'm the Chief Digital Manager of DATAVERSITY. We would like to thank you for joining this DATAVERSITY webinar, Why Cloud-Native Kafka Matters: Four Reasons to Stop Managing It Yourself, sponsored today by Confluent. Just a couple of points to get us started. Due to the large number of people attending these sessions, you will be muted during the webinar. For questions, we will be collecting them via the Q&A panel. Or, if you'd like to tweet, we encourage you to share highlights or questions via Twitter using the hashtag #DATAVERSITY. And if you'd like to chat with us or with each other, we certainly encourage you to do so. Just open the Q&A or chat panel; you will find the icons for those features in the bottom middle of your screen. And as always, we will send a follow-up email within two business days containing links to the slides, the recording of the session, and any additional information requested throughout the webinar.

Now let me introduce our speaker for today, Greg Murphy. Greg works in product marketing at Confluent, where he is responsible for go-to-market activities surrounding Confluent's cloud-native service for Apache Kafka, Confluent Cloud. Prior to Confluent, Greg worked in product marketing and product management at Salesforce and Google. Originally from the Boston area and now living in San Francisco, Greg holds a business degree from Miami University of Ohio. And with that, I will give the floor to Greg to get today's webinar started.

Hey, Shannon, thanks so much. Can you hear me all right? Yeah, you sound great. Amazing. Thank you so much to you and the DATAVERSITY team for having me on today and giving Confluent the floor. Hi, everyone. Nice to meet you. My name is Greg Murphy. As Shannon mentioned, I'm part of the Confluent team here, working within our product marketing group, and I'm excited to have some time today to speak with everyone about cloud-native Kafka, why we think it matters, and four reasons why it's time to stop managing it yourself.

So, jumping right in: speaking to this group today, you probably already know the value of data in motion. Think about data-driven operations, building out efficiencies on the back end of your businesses, building real-time, personalized customer experiences, really cutting-edge experiences that surprise and delight your customers and deliver what they expect, and, in general, opening up new business models. All of this rests on data in motion: real-time data, real-time actions and behaviors. And behind that is Kafka. So we know the value and the power of Kafka today. When we look out across the market, we see that Kafka and event streaming have been adopted by about 80% of Fortune 100 companies, which are using them to power these very use cases and experiences. So there's major adoption of this technology, and its use and importance within organizations keep growing. What we'd love to spend more time talking about today is: what is the cost? What's the toll? What does that look like for a business? And is that the right place to be investing precious resources across your teams and organizations?
So when we look at this, we know that the toll and risk of managing Kafka grows and continues to grow over time. Think about architecture planning when launching Kafka for the first time or, more likely for this group, launching net new use cases: the processes that go into planning and sizing these clusters, actually provisioning them, and the number of teams involved, between development teams making requests and operations teams responsible for infrastructure and standing up these clusters globally. There's a lot of effort that goes into just getting Kafka off the ground. Next, think about integrations: which data sources do we need to pull from, where do we need to land data, and how are we building out the necessary monitoring and reporting tools to support our implementations? Kafka is updating over time, so how are we managing those patches and upgrades? Do we have the required security and reliability functions and features built into our implementation? And on an ongoing basis, how are we maintaining, upgrading, and restarting the servers running the technology? Lastly, as our use cases and deployments evolve, think about cluster rebalancing, how we're expanding clusters, and how we're providing our organization with the individual utilization metrics that may be required. Our responsibility related to Kafka simply keeps growing over time. And with this responsibility comes complexity, and with this complexity comes a toll on our businesses, and a risk to our businesses, when we focus on this type of technology.

With this background in place, you can probably relate to this experience, and where I wanted to take today's conversation is thinking about core versus non-core. I wanted to share this quote from Geoffrey Moore, author and management expert. When we think about agility, we typically think about adaptability and speed, but focus is also incredibly important. Thinking about companies and how they use their time, what Moore says is: whatever the new thing is that creates differentiation, that's core, meaning it's core to your business. However, any activity that is not going to differentiate the business in the eyes of the customer, that's non-core. When we focus on non-core activities and do them well, nobody really cares. But if we are not successful in our implementation or delivery of non-core activities, that's where we can get burned. That's where we can get in trouble. We have limited resources and focus, so it's really important to retain that focus on core, strategic activities rather than spending our time and resources on non-core activities. At Confluent, Kafka is core to our business; infrastructure management is core to our business. What we are focused on is being incredibly successful at taking the responsibilities of Kafka off of you and your teams' plates, allowing you to shift your focus to those core activities: things that matter to your business and your customers, things that are going to surprise and delight them. And you're probably in a spot where you're starting to think about where you may have taken this kind of action in the past. We're thinking about this too.
You've probably made this change in other systems and other areas, shifting to cloud-native services and offloading non-core activities to best-of-breed services. We're showing here, across storage, data warehousing, and databases, some self-managed options that exist within the marketplace, and then the fully managed services that exist in the cloud. These vary across the board, but what we know is that by 2022, 75% of all databases are expected to be deployed on or migrated to a cloud platform. So what's important is that we start shifting our work to those same locations and working directly on top of them, where the data resides. The next step is thinking about data movement: the self-managed option of Kafka versus a fully managed, cloud-native service that you can leverage to offload those non-core activities and enable your teams to focus on differentiation, value, and development.

That really leads us to where we're focused today, and that is the cloud-native difference with Confluent. When thinking about the options that exist in the marketplace for offloading your Kafka management and the operational burdens associated with Kafka, it's important to level-set on what those options look like. Most people today are probably sitting on the left-hand side of this view, self-managing Kafka in a DIY manner. This is full development and maintenance without any support. It's lowest on the spectrum for ease and speed of using Kafka, and therefore for ease and speed of delivering new solutions, new use cases, and new value. There are also options in the marketplace that are partially managed, or cloud-hosted. Here you might get an easy means of provisioning and standing up Kafka, but you're still left responsible for manual tasks, with tools and support provided. Cluster upgrades, cluster patches, SLAs for the deployment: these are things you're still responsible for with partially managed solutions. What we want to focus on today, and what we'll be diving into, is what a fully managed, cloud-native solution for Kafka looks like: automated product capabilities with no operational overhead.

Here at Confluent, when we say fully managed, we mean fully managed, and our definition of that is a cloud-native service. We've taken a technology that we ourselves created here at Confluent, Kafka, and we rebuilt it, reimagined it, built it from the ground up as a cloud service, and we're delivering that today as a cloud-native service. When we think about how we at Confluent define a cloud-native service for Kafka, we break it down into four key benefits, or four tenets, of our offering. The first thing we think is really important to look for in a cloud service for Kafka is that it's elastic, allowing you to scale up instantly to meet the demands of any use case, any peak traffic, and to scale back down to avoid over-provisioning infrastructure. We're going to step into each of these areas in more detail and then show you what they actually look like inside the Confluent service. The next tenet is an infinite solution.
An infinite solution allows you and your teams to store unlimited data on Confluent, enhancing your real-time apps and use cases with a broader set of data. We know that the most valuable use cases deployed today are based both on real-time data and on historical context, so it's important to have easy access to as much data as you need, building out that system of record and fueling the use cases you want to build and launch. Next, it's important to work with a service that is global, building a consistent data fabric throughout your organization by linking clusters across your different environments. You may be running on-prem today, or in a single cloud environment, and that's likely going to change: most organizations are either already running multi-cloud or planning to. Having the ability to run on a single service, yet have access to all of your data wherever it may be, and to move data across those environments, on-prem to cloud or back and throughout, is incredibly important to make sure you can run efficiently in the future, no matter what decision you make today. And lastly, we'll touch on the extensions of Kafka that exist within the platform: a complete data-in-motion platform, allowing your teams to go well beyond Kafka and build and launch faster. We'll step into what that means, ranging from fully managed source and sink integrations through to enterprise-grade security features. So you're not only picking up the fully managed, cloud-native benefits of Kafka, but also additional tooling that enables your teams to move quickly and build and launch faster.

The first area I'll step into in more detail is our elastic offering. Initially, out of the gate, getting up and running with Kafka can be feasible and not too challenging for teams, but what we quickly have to start thinking about is what we do as demand for our apps grows, as demand for our services grows. Typically, a dev team will model out some hypothetical demand scenario for a year, size and provision a cluster to match that demand, and then get started. That demand may or may not materialize, or it may sometimes exceed expectations. This really lands you in one of two spots: you're either paying for over-provisioned capacity, or you're risking performance degradation because you under-provisioned and don't have the necessary capacity, which results in a poor customer experience and potentially even downtime. When teams are faced with this challenge, they typically take the conservative route of over-provisioning, and often pick up that increased spend for capacity that's unused or unlikely to be used. With Confluent, you don't have to worry about this. We have a couple of different options in the cluster types we make available to customers. Our basic and standard cluster tiers scale elastically between 0 and 100 megabytes per second. You literally have no responsibilities here: this is automatic scaling, up and down with the actual activity inside your account, done automatically on your behalf.
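To put rough numbers on the provisioning dilemma we just walked through, here's a back-of-the-envelope sketch in Python. Every figure is hypothetical, purely for illustration:

```python
# Back-of-the-envelope view of the over-provisioning trap described above.
# All numbers here are hypothetical, for illustration only.
forecast_mbps = 40        # throughput the team expects at year end
headroom = 2.0            # safety multiplier to avoid under-provisioning
provisioned_mbps = forecast_mbps * headroom

actual_peak_mbps = 25     # what demand actually turned out to be
idle_fraction = 1 - actual_peak_mbps / provisioned_mbps

print(f"Provisioned {provisioned_mbps:.0f} MB/s, peaked at {actual_peak_mbps} MB/s: "
      f"{idle_fraction:.0%} of paid capacity sat idle")
# Provisioned 80 MB/s, peaked at 25 MB/s: 69% of paid capacity sat idle
```

Elastic scaling removes exactly this guesswork: capacity follows actual demand rather than a year-old forecast.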
If you happen to be running cloud-scale streams at, say, gigabytes per second in throughput, we also have dedicated clusters available that scale up to tens of gigabytes per second, and it's easy, single-click provisioning inside the Confluent UI that brings this to life. So when we look at these two options side by side, what might take days or weeks of setting up, orchestrating, and organizing capacity planning and peak throughput scenarios now literally takes seconds, or a few clicks, inside the Confluent UI. The responsibility, time, and effort that went into what we'd call non-core activities can be shifted to actually building and developing.

I'll give a quick overview here; I grabbed some screenshots of how this runs. This is a cluster that's already running in production. I step over and add CKUs, Confluent Units for Kafka, which are units of capacity for Confluent clusters. You can see we started running at about 300 megabytes per second; this is an actual implementation that we captured while it was running. I'll run that one more time for everyone. It eventually ended up running at about 10 gigabytes per second. So again, it's simple inside the UI: a couple of clicks, as we said, to add that capacity to my cluster. It can be done while the cluster is in flight and running, so it's a dynamic addition of capacity, which is really exciting. What would typically have taken a larger effort of benchmarking and cross-functional alignment is now happening in just a couple of clicks inside the product UI. We're pretty excited about this and about what it offers the customers running on Confluent today.

Next, I want to step into the components of an infinite solution and what you pick up with a cloud-native service from Confluent that is infinite. As we mentioned before, in order to build these meaningful apps, it's really important to have access not only to real-time data but also to historical context. Those are the two pieces coming together, real-time data and historical context, that allow you to build the solutions that are most meaningful and most impactful to your business and to your customers. The typical implementation is to maintain historical data in a separate system and have your apps read from that system as needed. Obviously, that's complex. Every app has to deal with another data source; it has to use two sets of APIs and deal with different performance characteristics. It all leads to complexity, and with complexity comes additional spend and risk to your business. So what we thought about, and what we've delivered, is imagining Kafka if that data could be retained forever. All apps then need to get data from just one system, from Kafka itself, for both recent and historical data. This type of implementation is elegant; it's radically simple for app development. It equips our teams with what they need to be successful in building their solutions. But for the operators responsible for bringing this to life, it's still complex: there's a lot of work to deal with provisioning that Kafka storage and continually expanding it over time for more data and additional use cases.
That's why we're excited about the infinite retention capabilities within Confluent. What we enable is the ability to scale up storage without adding compute resources, by removing storage limits on Kafka topics to allow infinite data retention in Kafka. I'll give you a quick look at how this works inside the UI, but essentially, this allows you to configure, at the topic level, both the storage size and the storage time period, and both can be set all the way up to infinite. So it's an easy means of controlling how much data can be stored within Kafka, up to and including infinite. With this, there's no longer a need to trade off retention costs against new use cases; we scale that automatically for you. With this feature, we've seen that customers can sometimes see a 70% reduction in their storage costs compared to running and provisioning that storage themselves with other Kafka services. One thing to note: today, infinite storage is available for our AWS and Google Cloud clusters, and we're actively working on bringing it to our Azure clusters as well.

Here's a quick run-through. As I said, we configure this at the topic level. Within my topic, I have two settings: the retention time allowed within my topic, and the maximum retention size. It's a quick view, but this is a real view of a cluster scaling up to one petabyte of data stored within an individual topic. Setting both my retention time and retention size to infinite, storage is now unlimited across this topic. And we can see this within the same monitoring dashboard: my cluster's storage climbs and climbs, eventually hitting just over one petabyte of data.

When we think about these two tenets of the platform together, the elastic scalability and the ability to infinitely store data within your Kafka cluster, they really establish Kafka as the system of record for these use cases. Both of these tenets, in a major way, enable customers to maintain focus on value, on building and launching their applications, rather than on non-core, infrastructure-related activities. I wanted to pull up a nice customer quote that really exemplifies this value. AO.com is a major online retailer out of the UK: "Confluent gives us the tools we need to drive innovation. Before Confluent Cloud, when we had broker outages that required rebuilds, it could take up to three days and divert developers' time to resolve. Now Confluent takes care of everything for us, so our developers can focus on building new features and applications." The reason I pulled this quote up today is that it exemplifies exactly what we're trying to provide teams: a means of leveraging Kafka as an incredibly valuable and powerful technology, in a way that's most directly beneficial to their customers and their business, focusing on building value while we take care of the rest.
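One quick note before we move on: under the hood, those topic-level settings map onto Kafka's standard retention configs, so they can also be set programmatically. A minimal sketch with the confluent-kafka Python client, with placeholder endpoint and credentials:

```python
# Minimal sketch of setting a topic's retention to infinite with the
# confluent-kafka Python client. retention.ms=-1 and retention.bytes=-1
# are the standard Kafka settings for unbounded time/size retention.
# Endpoint and credentials are placeholders.
from confluent_kafka.admin import AdminClient, ConfigResource

admin = AdminClient({
    "bootstrap.servers": "<bootstrap-endpoint>",
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<api-key>",
    "sasl.password": "<api-secret>",
})

# Note: alter_configs replaces the topic's whole dynamic config, so in
# practice you'd read the current config first, or use incremental
# alteration where your client version supports it.
resource = ConfigResource(
    "topic", "orders",
    set_config={"retention.ms": "-1", "retention.bytes": "-1"},
)
for res, fut in admin.alter_configs([resource]).items():
    fut.result()  # raises if the broker rejects the change
    print(f"updated {res}")
```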
Next, I want to talk about the global capabilities of a cloud-native service for Kafka. This is about making sure that, regardless of where you're running Kafka today, you're set up for success, both for your near-term expansion of use cases and for future, unknown changes to your business that may require your data in places where it isn't necessarily required today. With Confluent, you can deploy Kafka how and where you want. You can sign up directly through us, and we're also available across each of the major public cloud providers and provisionable within their marketplaces: AWS, Azure, and Google Cloud. Within each of these three public clouds we have strong coverage across regions: 20 regions within AWS, 16 regions within Azure, and 22 regions within Google Cloud. We're constantly updating and expanding across these regions as well; I think we're actually pretty well covered today, but as new regions are introduced, and from time to time for any outlier regions, they're added into the mix on a regular basis. So you can start your journey with Confluent directly, or, as I mentioned, sign up via the marketplaces. We see some customers going this way when they're already established with a cloud provider: they may want to align with an existing commitment that's already in place, draw from it, and fold their Confluent bill into the monthly bill already running with that provider.

When we think about a global offering in relation to a cloud-native service here at Confluent, we're thinking about much more than simply the availability of the service across these clouds; we're thinking about how we can actually assist customers in moving their data between the different locations where it's needed. So I want to talk a little more about Cluster Linking. Cluster Linking is a feature we've built that creates a consistent data layer across your clouds, enabling an easy means of sharing data between fully independent Kafka clusters, whether those are independent across geographies within a single cloud provider, across individual public cloud providers, or even from on-prem, self-managed environments into your public cloud environments. I'm really excited about this because Cluster Linking requires no additional infrastructure. When you look at the options that exist today to link clusters, such as MirrorMaker 2, for example, they often require standing up and provisioning a new cluster just to manage the implementation. There's no additional infrastructure required here; we can directly link clusters across these different locations. And secondly, with Cluster Linking we fully preserve offsets. We create perfectly mirrored topics from a source cluster over to a destination cluster, and because it's a perfect mirror between the two, we bypass the risks of reprocessing data or missing data. That makes it very easy to run these processes for data sharing and data movement, and potentially for building out high-availability or disaster-recovery scenarios: planning for, say, a region-specific outage within a cloud platform and having an easy means of failing over to a cluster in another region. This is going to be much, much more feasible with Cluster Linking.
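To make that failover idea concrete, here's a minimal sketch in Python. It assumes a cluster-linked mirror topic as just described, with consumer offsets synced across the link; all endpoints, names, and credentials are placeholders:

```python
# Sketch of a region failover over a cluster link. Because the mirror
# topic preserves offsets exactly, a consumer group's committed positions
# stay valid on the DR cluster. Endpoints and credentials are placeholders.
from confluent_kafka import Consumer

def make_consumer(bootstrap_servers):
    return Consumer({
        "bootstrap.servers": bootstrap_servers,
        "group.id": "payments-service",   # same group ID on both clusters
        "auto.offset.reset": "earliest",
        "security.protocol": "SASL_SSL",
        "sasl.mechanisms": "PLAIN",
        "sasl.username": "<api-key>",
        "sasl.password": "<api-secret>",
    })

# Normal operation against the primary region.
consumer = make_consumer("<primary-region-bootstrap>")
consumer.subscribe(["orders"])
msg = consumer.poll(1.0)

# On a regional outage, repoint at the DR cluster. With offsets mirrored
# exactly, the group resumes from its last committed position, without
# reprocessing old events or skipping new ones.
consumer.close()
consumer = make_consumer("<dr-region-bootstrap>")
consumer.subscribe(["orders"])
```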
Cluster Linking is also an easy means of bridging the gap from on-prem environments over to the cloud. If you're running on-prem today and thinking about making the move to a cloud-native service, with Confluent or any other provider, this is a super easy means of doing so and bringing that data over into one of our clusters. So really, with Cluster Linking, which is currently in preview, what you get is real-time data globally, across your public and private cloud clusters, syncing in real time. We're super excited about this: the easy means of data movement across regions, clouds, and environments, and how it equips our customers to build new use cases, establish a higher degree of availability for their services, and just move quicker with what they're delivering on top of Kafka.

Lastly, I want to touch on the complete aspects of a cloud-native service for Kafka. As I mentioned earlier, when we think about equipping customers with the tools and the service they need to be most effective in working with data in motion and building real-time apps, yes, a major component is offloading the operational burden of Kafka to a cloud-native service, but it's also important to land on a platform that goes well beyond Kafka and equips your teams with the additional tools and components they need to build on top of Kafka and launch faster. With Confluent, you gain access to a complete data-in-motion platform. The first part of this is prebuilt integrations, enabling you and your teams to instantly integrate clusters with the most popular data sources and sinks in the Kafka ecosystem. I'll step into a quick overview of the integrations available today; we're always expanding on those, and literally last night, as I was building these slides, I had a couple of new ones I needed to drop in. These are fully managed connectors, in addition to a larger array of self-managed connector options that we also make available. Next, you gain access to data governance features, making data backwards-compatible and future-proof with schema management and schema validation tools. We also have large investments here in data lineage and data catalog, so think of data governance as the big picture; I'll walk you through where we are today and give a sneak peek of where we're going in the future. Oftentimes when we think about Kafka, we think about event streaming; what we also want to equip our customers with is the ability to not only absorb, integrate, and distribute that data, but also enrich it and improve it with stream processing, on the wire, within Kafka, on the exact same platform where you're already working. We do that with ksqlDB, a SQL-syntax offering for working with and processing data; we'll take a look at that. And lastly, there's access to enterprise-grade security: ensuring data confidentiality, controlling access management, and having an easy means of monitoring activity across your deployment and your Kafka architecture, all of which is incredibly important.
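One thing worth noting before we step through each of these: underneath all the platform features, a fully managed cluster still speaks plain Kafka, so any standard client can produce to it with the usual few lines of configuration. A minimal sketch, with placeholder endpoint and credentials:

```python
# Minimal producer against a fully managed cluster; the endpoint and API
# key/secret are placeholders copied from the cluster's settings page.
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "<bootstrap-endpoint>:9092",
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<api-key>",
    "sasl.password": "<api-secret>",
})

producer.produce("orders", key="order-123", value='{"total": 42.50}')
producer.flush()  # block until delivery is acknowledged
```

Everything layered on top of that, the connectors, governance, ksqlDB, and security we'll walk through next, builds on this same foundation.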
Across the board, these are again what we would consider non-core functionality for your business: things you should be seeking, and should expect to have access to, when shifting to a cloud-native service for Kafka.

So, thinking about those prebuilt integrations, I want to give a quick snapshot of some of the integrations we have available today. As I mentioned, these are fully managed integrations within Confluent Cloud, so configuring and adding them to your account is literally a couple of clicks inside the cloud UI. I mentioned that last night I needed to make a couple of additions to this slide: I think it was Wednesday of last week that we made our Datadog Metrics sink connector generally available, so that's net new, and a couple of weeks prior we went GA with our source and sink connectors for MongoDB Atlas. This is a major area of investment. We already have a massive number of connectors available today, across best-of-breed providers, the public clouds themselves, and other options, and we're always adding to them. They're super easy to use, too. One comment I want to make as well: most of the folks here, your teams, may already be Kafka experts, self-managing today. But if you have new folks coming on, or anybody looking to learn, within Confluent Cloud we also have a major dedication to people who are new to Kafka. You can raise your hand inside the product and say, hey, I'm looking to learn, I'd like some tutorials, some step-by-step guidance on how to use the service. We have that available within the product for everything from cluster creation through to the use of ksqlDB for developing stream processing apps, and also for steps like configuring your source and sink integrations. Using these is literally step by step inside the tool, where we walk you through exactly what you need to develop and move quickly. One connector I think is kind of interesting for this group, which I was excited to find when I joined the company, is the Datagen source. When we're in development stages, building a proof of concept, it can often be challenging to actually have access to a data set, and it can be really time-consuming and complex to build out mock data sets when we need them, say, for demos. The Datagen source is a really easy means we've built for standing up a mock data source in a short amount of time, so you have a full-fledged source of data available for development stages and getting up and running.

Now I want to talk a little about data governance and what we offer today in terms of schema management. Schema Registry is our fully managed offering for creating, editing, viewing, and comparing topic schemas. It has been in the product for some time and is one of our most widely adopted and popular components; most customers use this feature for managing their schemas. We also just made an update to our Schema Registry offering, adding validation. I think it was maybe a month or two ago that we added schema validation: it's configured at the topic level and managed broker-side, as we see messages coming through.
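Schemas themselves sit behind a straightforward API. Here's a minimal sketch of registering and fetching a schema with the confluent-kafka Python client's Schema Registry support; the endpoint, credentials, and subject name are placeholders:

```python
# Minimal Schema Registry sketch; endpoint, credentials, and subject
# names are placeholders.
from confluent_kafka.schema_registry import SchemaRegistryClient, Schema

sr = SchemaRegistryClient({
    "url": "https://<schema-registry-endpoint>",
    "basic.auth.user.info": "<sr-api-key>:<sr-api-secret>",
})

# Register (or evolve) the value schema for the 'orders' topic.
order_schema = Schema(
    '{"type":"record","name":"Order","fields":'
    '[{"name":"order_id","type":"string"},{"name":"total","type":"double"}]}',
    schema_type="AVRO",
)
schema_id = sr.register_schema("orders-value", order_schema)
print("registered schema id:", schema_id)

# Look up the latest registered version for that subject.
latest = sr.get_latest_version("orders-value")
print("latest version:", latest.version)
```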
With validation, a schema ID is tied to each incoming message, and we can configure, at the topic level, a check that validates that the schema ID associated with the message in fact matches a schema ID associated with the topic. So it's not only an easy means of creating, editing, and managing these schemas, but furthermore, you can configure and control, at the topic level, that all incoming messages are only written if they have a matching schema. And as I mentioned, we're making a lot of investment in this space of data governance in general. We currently have an early access program open for our latest developments there: data catalog and data lineage. These expand beyond schema management, which is really about data quality, into data catalog, an easier means of sorting, organizing, and seeing data across your deployment, and data lineage, a rich UI approach to seeing the movement of data across your Kafka deployment. Both of those are in early access today, and we'll be working to take them to GA at some point in the future.

Next, I want to talk about the stream processing capabilities that exist on the platform. As I mentioned, Kafka is often associated with event streaming: bringing data in, use cases for integration and ETL, moving data into sinks and the locations where it needs to go. But obviously there's a major need for working with that data: enriching it, processing it, making it more valuable than when we found it. What we believe here at Confluent is that the ability to work in context, on the wire within Kafka, and enrich that data while you're already working with it, is highly valuable, on the platform where you already reside. ksqlDB does just that: it gives you everything you need to build a complete event streaming application, entirely with SQL syntax. So if you're already working in SQL today and have that background, you're going to be up and running really quickly and successfully building an event streaming application with ksqlDB. This is also an area where we're constantly making investments and regular updates.

Lastly, I want to talk about the enterprise-grade security and compliance features you get access to when working with a cloud-native service from Confluent. First, securing data against unwanted access: by default, we encrypt data at rest and in motion, and we have private networking options available between ourselves and the different cloud providers. We've recently made updates here as well. Encryption at rest is on by default, and we also have options for bring-your-own-key, or BYOK, key management. So if you have a key you're managing elsewhere, let's say in GCP or AWS, and you want to bring it over to your Kafka deployment with Confluent, you can do that; we're live with BYOK for AWS and GCP today. Next, controlling access to your clusters and resources: we have SAML SSO controls available today.
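Before I finish out the access-control list, here's a quick taste of what that ksqlDB syntax looks like in practice. This is a sketch with illustrative stream and column names, submitted to a ksqlDB endpoint over its REST API:

```python
# Illustrative ksqlDB statements submitted over the REST API; the stream,
# columns, and endpoint are placeholders, not taken from the webinar.
import requests

statements = """
    CREATE STREAM orders (order_id VARCHAR, region VARCHAR, total DOUBLE)
        WITH (KAFKA_TOPIC='orders', VALUE_FORMAT='JSON');

    CREATE TABLE revenue_by_region AS
        SELECT region, SUM(total) AS revenue
        FROM orders
        GROUP BY region
        EMIT CHANGES;
"""

resp = requests.post(
    "https://<ksqldb-endpoint>/ksql",
    auth=("<ksqldb-api-key>", "<ksqldb-api-secret>"),
    json={"ksql": statements, "streamsProperties": {}},
)
resp.raise_for_status()
```

Both statements run as continuous queries: every new order event keeps the revenue_by_region table up to date in real time.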
We also have RBAC, role-based access control, available for cluster authorizations, providing an easy means of running at scale and granting permissions across your deployment based on individual roles; there are four different roles that can be set for users inside your cluster. We also have audit logs, enabling an easy means of watching activity across your deployment and becoming aware of potentially suspicious activity. There are specialized topics built specifically for audit logs that can be consumed in real time and brought into third-party tools like Splunk or Elastic, for an easy means of reviewing them and taking action on any unexpected or unusual behavior. And of course, as part of a Kafka offering, we have ACLs available for more granular, user-specific permissions within the cluster and at the data plane. Finally, we have a set of compliance certifications: SOC 1, SOC 2, and SOC 3, ISO 27001 certification, and PCI, HIPAA, and GDPR readiness, with those latter ones related to our dedicated clusters.

The last thing I want to think about today, and the conversation I want to provoke, is: how much does Kafka cost your business today? Really, how much are you spending to run, manage, grow, and evolve that Kafka deployment, and what could that look like if you were to reinvest with a cloud-native service like Confluent? It might sound like everything we just went through, the elastic and infinite capabilities, the global offering, being able to run Kafka across all the different environments where you are today or might be in the future, plus the complete platform your teams gain access to for moving faster and building and launching quickly, would come together with a major price tag. But when we start thinking about what actually goes into the cost of running Kafka yourself today, the biggest bulk is cloud infrastructure, along with the operational responsibilities and overhead, including folks in operations who might be fully dedicated to Kafka instead of other areas of the business where they could be more impactful, plus support, outages, and other spend related to Kafka. When we add all of this together and compare it to the price tag of Confluent Cloud, we work with customers and often see that they can reduce their total cost of ownership for Kafka by as much as 60% in making this move to Confluent Cloud. So it's often a really valuable change for customers, once they understand what that cost is today and how they may be able to not only save money but also de-risk and accelerate their use of this technology inside their business.

So what I want to offer today is how to move forward and start thinking about this. The first option you have available with Confluent is to request a TCO assessment; that's our total cost of ownership assessment. There is absolutely no obligation whatsoever to move forward with Confluent in running through this assessment. Our goal is first and foremost to work with you and help you better understand what Kafka looks like for your individual business today: how much is it really costing you?
And then we can move into, if you choose, a better understanding of what that shift might look like if you were running Kafka on a cloud-native service like Confluent. So I encourage anyone here who's interested in seeing that side by side, and in just learning a bit more, again without any obligation, to request that assessment. It's about an hour's worth of time, at minimum, to run through, and there's no obligation: you can take it and step back into your business as you see fit. What's exciting, too, is that we have a free trial available with Confluent, across all three public clouds: AWS, GCP, and Azure. It's available today; anyone can come into our service and sign up. We're currently offering $200 of usage per month for your first three months to get up and running with Confluent. With that, I want to say thank you to everyone for the time you've given me to run through this. I really encourage you to start thinking about what the cost of Kafka is to your business today. What is the risk associated with that cost and that complexity? How much of the investment you're making today is core to your business, really driving high value for your customers, versus how much is non-core, and what is the risk if it goes wrong? Is there a better way to be running this technology? We'd love to have that conversation with you and see where we may be able to assist. So thank you very much, Shannon and team, for providing me this time, and I think at this point we can open it up for any questions that might be coming through.

Greg, thank you so much for this great presentation. We do have questions coming in already; if you have questions for Greg, feel free to submit them in the Q&A section of your screen, via the icon at the bottom middle. And to answer the most commonly asked question: just a reminder that I will send a follow-up email to all registrants by end of day Thursday, with links to the slides, links to the recording, and anything else requested here. So, diving in here, Greg: how do you manage data pseudo... pseudonymization, I'm going to get tongue-tied on this word, thank you. I don't want some teams to read sensitive data from some topics.

Good question. Today we offer ACLs, which are an easy means of providing user-specific controls over exactly what data a user can access, down at the data plane, at the topic level, provisioning teams with access, or no access, to just the data they need. On top of that, as I mentioned, we have RBAC, role-based access control. Our initial implementation of RBAC is at the cluster and environment level, so it's a faster onboarding means of providing these permissions, available at the cluster level today. We're also going to be expanding that, allowing role-based controls at the topic level down the road.

Thank you. And how can you calculate the cost of your Kafka cluster on the cloud? So, the way to think about the cost of a Kafka cluster with us is based on the cluster type. As I said earlier, we have three options available: basic, standard, and dedicated clusters.
Our basic cluster is priced exclusively on your activity: data moved in, data moved out, and data stored. That is it; your actual usage is the only charge you'll see for that cluster. Our next offering is the standard cluster, which fits use cases up to 100 megabytes per second and has a higher SLA, at 99.95%. In addition to the data activity within your account, there's a base fee for that cluster, which comes out to about $1,000 per month. And lastly, we have our dedicated clusters, which run at our highest possible capacity; as I mentioned, those can be scaled up to about 10 gigabytes per second as necessary, and they're scaled based on individual capacity units added to them, which we call CKUs. You can find these price points in more detail on Confluent's website, and it's also something we'd step into within that total cost of ownership, or TCO, assessment.

Thank you. So, what are the popular use cases for Kafka in the banking and financial industry? Across the board, where we see most customers use Kafka today is, obviously, the baseline of data integration, data movement, and offloading from mainframe systems into Kafka and the cloud. But where we see more advancement is preparing data for use cases like real-time fraud detection. This is an area where we see Kafka being used to spot anomalies in activity across accounts and, in real time, trigger alerts and take action inside an account: for example, sending a text to say, hey, it looks like something odd occurred inside your account, yes or no to confirm it. Having access to real-time data to trigger that text message being sent is one example of how Kafka can be used.

And how will infinite data access work, especially with regard to performance considerations? We have these separated: what we've done with the infinite offering is introduce a separation of compute and storage. Within the clusters where infinite storage is available, storage is separate from compute, so you can add capacity for expanded compute within your cluster separately from where the data is stored. So there shouldn't be a conflict there.

And what does the deployment look like? Is it deployed on our cloud using the marketplace, or does the data leave our cloud? You have some options there. Data is going to sit within clusters and environments in the public cloud; the clusters sit within Confluent's installation of that cloud, so within GCP, Azure, and AWS, Confluent has our own instances. We enable you to run with one or many different cluster types sitting within our installation: we have the basic and standard clusters, which are multi-tenant clusters you have immediate access to, or, within our environment, you can stand up a dedicated cluster, which would be a dedicated VPC or VNet sitting in the public cloud but within Confluent's installation. We also have a means of managing data within a private cloud or on-prem: supporting the Confluent Cloud offering is Confluent Platform, which is our software solution that can be used for managing Kafka on-prem.
It can also be used for managing Kafka within your own private cloud. So there are a lot of options for how you can mix and match this. What we've focused on today, though, is Confluent Cloud, which is hosted within our own public cloud environments.

Love it. Great questions coming in, y'all. If you have questions, feel free to submit them in the Q&A section. I see another one coming in from the chat that I was just about to get to: can Confluent Cloud be set up for disaster recovery as a service? Is that a use case?

Yeah, this is a use case we support today, and we're investing in it even further. Within the global section, the feature I was focusing on there is Cluster Linking, and Cluster Linking is really our solution for data sharing and geo-replication across different clusters. What you'd have access to there, for example, is an easier means of planning for, let's say, a region-specific outage occurring at a public cloud provider. So if I'm running on AWS East, and I may be making that region name up, I'm not sure that's exactly how they have it named, let's say I'm running on AWS East and I have a cluster set up for possible failover on AWS West. Cluster Linking is the feature we make available, in preview today, for replicating data across topics from one region to another. That can be done without standing up any additional infrastructure. Typically, with solutions like MirrorMaker 2, there's a requirement to stand up new clusters just to manage that movement from one to the other; that's not required here. With the existing infrastructure you already have across those two region clusters, you can link them, and you can do so with perfectly mirrored topics and perfectly mirrored offsets, really removing the risk of reprocessing or missing data. All of that comes together as a really strong solution for building out higher-availability Kafka deployments and planning for disaster recovery and failover from one region to another in the event of a region-specific failure, which is the most common scenario we see customers planning for.

I love it. Keep the questions coming, y'all. Is Confluent Cloud in conformance with NIST 800-53 controls, and part of FedRAMP? Do you know? It may be a very specific one. Yeah, Shannon, let me follow up on that one to make sure I don't misspeak on that specific certification, if that's all right. Can we table that one and do a follow-up there? Yeah, absolutely; I'm not familiar with it myself, so no problem.

And can Confluent also be used on-premises, say on VMware, for disaster recovery as a service to the cloud? We can make data movement available from public clouds to on-prem environments and vice versa, back and forth. Sister to the Confluent Cloud offering, we have Confluent Platform, which is software we make available today, so you could run that on-prem, and use cases like a source cluster on-prem moving to a destination cluster in the cloud, or vice versa, would absolutely be part of how we think about disaster recovery and the different ways customers might want to implement it. So, I think we have time for one more question here.
How does use of Confluent work alongside existing cloud provider contracts, like AWS? Is it possible to set up Confluent Cloud consumption to just be an additional part of an existing monthly bill?

Yep, absolutely. We're trying to make this a super easy service to consume and bring online. We know that most customers we work with probably already have some type of footprint with at least one cloud provider, whether that's AWS, Azure, or GCP. You can absolutely align these; separate bills are not necessary. You can sign up for Confluent Cloud through your respective marketplace in order to unify billing and even leverage existing cloud commitments. So if you've already procured a certain degree of commitments, if you have spend that's open for the year, you can pull right from that with Confluent Cloud, and you would see us as a new line item on that monthly bill.

Thank you so much, Greg, for this great presentation, and thanks to Confluent for sponsoring today's webinar; really insightful stuff going on here. And again, just a reminder to everybody: I will be sending a follow-up email by end of day Thursday with links to the slides and links to the recording, and we'll get you answers to the questions we need to follow up on, too. So thank you so much, everybody, and I hope you all have a great day. Thanks. Thank you, Shannon. Thank you all.