Hi, this is Swapnil Bhartiya and welcome to TFiR: Let's Talk. Today we have with us Yaniv Ben Hemo, co-founder and CEO of Memphis.dev. Yaniv, it's great to have you on the show.

Thank you very much. Great to be here.

It's my pleasure to host you today. I would love to know the story of the company. Talk a bit about what led to its creation and what problem you're trying to solve for the ecosystem.

Data streaming in general is a hard and complicated domain, and within it there is one major component that is really the engine behind data streaming and a key piece of the data engineering landscape: the message broker. It started as an enabler for asynchronous communication between microservices, and today it's really the backbone and engine behind all of the real-time data streaming, event streaming, real-time processing, and real-time pipelines we know. We felt that the current landscape, the current technologies the market offers us as developers, data architects, and data engineers, don't really make sense anymore and present multiple challenges: management overhead, scalability, and hard troubleshooting. Observability is really difficult to get when you're working in an asynchronous, high-velocity data streaming environment. So we thought it was time to disrupt this industry and rebuild it from the ground up.

When you talk about streaming or real-time data, which industries or use cases leverage it? Real-time isn't for every use case, even if ideally we might want that. What are the specific use cases or industries that look at streaming or real-time data?
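The broker role described above, decoupling microservices so they can communicate asynchronously, can be sketched in a few lines. This is a minimal, Memphis-agnostic illustration: an in-memory `asyncio.Queue` stands in for the broker, and the two "services" are plain coroutines invented for the example.

```python
import asyncio

async def order_service(broker: asyncio.Queue) -> None:
    """Producer: publishes events without knowing who consumes them."""
    for order_id in range(3):
        await broker.put({"event": "order_created", "id": order_id})
    await broker.put(None)  # sentinel: no more events

async def billing_service(broker: asyncio.Queue, processed: list) -> None:
    """Consumer: reacts to events at its own pace, independently of the producer."""
    while True:
        event = await broker.get()
        if event is None:
            break
        processed.append(event["id"])

async def main() -> list:
    # The queue is the broker: neither service holds a reference to the other.
    broker: asyncio.Queue = asyncio.Queue()
    processed: list = []
    await asyncio.gather(order_service(broker), billing_service(broker, processed))
    return processed

print(asyncio.run(main()))  # → [0, 1, 2]
```

The point of the pattern is the indirection: either service can be redeployed, slowed down, or scaled out without the other noticing, which is exactly what a production broker provides across process and network boundaries.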
It's really driven by multiple factors, not necessarily a single use case. We definitely see a lot of the processing that used to be done with batch methodologies moving or transforming into streaming methodologies due to the growth of data. We also see a lot of use cases built on event-driven architecture, partly because of the economic environment: we want to utilize resources only when an event happens and we need to trigger some action based on it, rather than keeping a server or resources alive waiting for an event. And the third part, I think, is that we want to move fast and learn fast. We want to train our ML models faster than by collecting a batch of data and then running training on it. We want faster responses and faster answers. So I think real-time is a combination of all of those situations and factors in the industry and in the world.

You folks started an open source project in April. Before we talk about that specific project, when you look at Memphis.dev, how important is open source to you?

I always start by saying that we are an open-source-first company and an open-source-first product. We grew from open source. We contributed to different open source projects in the past. So it was only natural for us to open our core product to the community. It also enables us to grow out of the community, so it's not just our co-founders' minds and vision but the entire community of contributors surrounding Memphis. That's a very core belief within our company and our product, and I definitely recommend that every product give at least some portion of itself back to the community and let the community expand it.

Now let's talk about the open source project, Memphis, that you folks announced in April.
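The batch-versus-streaming shift described above comes down to when the answer is computed: a batch job waits for the full dataset and recomputes from scratch, while a streaming consumer updates its state per event and always has a current answer. A minimal sketch, using a running mean as a stand-in for any incremental aggregation:

```python
def batch_mean(values: list) -> float:
    """Batch: wait for all the data, then compute once."""
    return sum(values) / len(values)

class StreamingMean:
    """Streaming: fold each event into running state; a result is
    available after every event, not only at the end of a batch."""
    def __init__(self) -> None:
        self.count = 0
        self.total = 0.0

    def update(self, value: float) -> float:
        self.count += 1
        self.total += value
        return self.total / self.count  # current answer so far

stream = StreamingMean()
for reading in [10.0, 20.0, 30.0]:
    current = stream.update(reading)

# Over the same data, the incremental result matches the batch result,
# but the streaming side never had to hold or re-scan the whole dataset.
assert current == batch_mean([10.0, 20.0, 30.0])  # both 20.0
```

The same shape applies to the event-driven point in the passage: state is touched only when an event arrives, instead of a job periodically re-reading everything.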
So we released our GA version in April; we had released a beta version in May 2022, and the GA came out in April 2023. To date we have over 80 contributors, over 5,000 deployments, and 2.7K stars on GitHub. So the developer and data engineering community is really showing its love and support for Memphis.dev, and presumably we're solving some critical pain points the community has. Beyond that, I think the first and most important metric we've been able to gather from the community so far is the time to success with Memphis, from deployment to data ingestion, which for us is the most critical KPI we measure. On average it's five minutes, which is really disruptive and changes everything we were familiar with in regard to data streaming and other message brokers.

Open source projects are great for day one: you can get started easily. But the real challenge starts on day two, when you may need support, updates, upgrades, and additional features, and not every company can afford developers with an inside-out understanding of the project. That's where commercialization or support behind an open source project comes into play. So talk a bit about what you folks are doing to support this project, so users can adopt it knowing there's a throat to choke.

Yeah, when we work with event streaming or event processing at scale, the day-one operations are usually the management overhead, the maintenance, the scalability, the observability, and the client's responsibility for everything, including the wrappers we need to build. When we switch to a managed version of an event streaming platform or message broker, those day-one operations are usually out of the way.
But then you need to worry about the day-two operations, and the day-two operations are really the reason we brought a message broker into our environment in the first place: to build some real-time use case, to build some streaming pipeline, to actually do something for our product or infrastructure with the message broker itself. That's really the magic of Memphis, I would say. On day-zero and day-one operations, everything comes out of the box, but day-two operations are where the magic and the differentiator of Memphis come into play, because everything is embedded: all the features you need in order to build streaming pipelines and real-time features are within the platform itself. Instead of you adapting yourself and your application to your event streaming platform, Memphis does it for you and adapts itself to your application.

You folks are also announcing Memphis Cloud. Talk a bit about how this is going to help customers leverage some of these open source technologies so that, once again, they can focus on their own business and not worry too much about the plumbing.

Exactly. Worry about extracting value and insights and moving data wherever it needs to reach, and less about the engineering and, as you said, the plumbing behind it. That's definitely the core value of Memphis, along with the developer experience, the ease of use, and really helping developers reach day-two operations, or reach production, super fast and in a reliable manner. Another thing I usually like to add about Memphis: we like to categorize products, and we call ourselves, for example, an alternative, and an alternative usually means we're replacing something.
But we don't necessarily need to replace your existing event streaming technology, for example Kafka or others. We can, and we're already doing this with multiple enterprises and customers, co-exist next to your existing event streaming platform or message broker, bring in the data operations I talked about that Memphis is so great at, and augment existing event streaming platforms, really taking the data operations from Memphis and layering them on top of existing technologies.

Can you talk about some of the core features and components of Memphis Cloud?

Memphis Functions, Memphis Connectors, and multi-tenancy to support all of the SaaS platforms and SaaS architectures that really ask for a true multi-tenant environment. It will be able to support their massive scale, up to 65,000 tenants per account, which is a huge and highly requested capability in the industry. Last but not least, the multi-cloud approach. I think that after AI, the next big movement in the tech industry will be multi-cloud strategy. It's one of the biggest challenges, definitely when you're talking about data streaming and data at massive scale, and Memphis Cloud is going to enable it from day one.

There are certain use cases, whether security or otherwise; today we live in a data-driven world. How much demand do you see, or do you feel more of the industry should be leveraging streaming data? We are all collecting a lot of data, but we need to extract value from it. Where do you see we are when it comes to streaming data and the maturity, understanding, and education of the ecosystem?

More than ever, actually. It's really interesting to see the changes over the course of the last two years. We saw a huge boom and explosion in the industry in regard to data management, data movement, data governance, and data quality.
And I think the outcomes we see today, thanks to the advancement and movement we had in the data domain, were really the enabler for what we now see as AI, ChatGPT, and all of the other machine learning functions and implementations. So I think we should only see it grow and definitely get more value out of it. This is definitely the golden age for data.

You folks just came out with the Memphis Cloud announcement today. But if I can ask, what are the things you folks are working on that we should be looking for? What can we expect from Memphis this year?

We can't reveal it yet, but we're definitely going to bring a game-changing feature into stream processing that will combine AI. If we talked about an average time of five minutes from deployment to success, or to data ingestion, then we're talking about even fewer minutes than that, with greater outcomes and greater insights out of the streamed data. So it's worth waiting for.

Excellent. I look forward to chatting with you about that. Yaniv, thank you so much for taking the time today, and of course for talking not only about Memphis but also about the larger streaming data ecosystem. Thanks for sharing those insights, and I would love to chat with you again when you make the announcement you just mentioned. Thank you.

Thank you very much for your time. It was a pleasure.