We have another very interesting customer session with Sahibinden up next. Sahibinden is Turkey's biggest online marketplace. They have a very unique use case: they are deploying several workloads, including databases such as MySQL, MongoDB, Cassandra, ClickHouse, and Apache Kafka, on Red Hat OpenShift. This supports their whole website and handles all the traffic they receive. We have Cem Omertag, who is a software and data architecture manager at Sahibinden, and Erkin, who is a senior solutions architect at Red Hat and helps our customers in their development journeys, and they're here to shed more light on the Sahibinden story. So hi Cem and hi Erkin, welcome to the Commons session, and thank you so much for being here.

Hello everyone. Hello everyone.

So yes, you can see their profiles up here and we can just dive right in; I won't waste any more time. So can you tell me a bit about Sahibinden? What does Sahibinden actually do?

Sahibinden is a classifieds platform from Turkey. A diverse spectrum of new and second-hand goods is sold on the Turkish platform: real estate, cars, trains, clothing, food. With close to 60 million active users and 30 million page views per month, Sahibinden is the fourth-largest online classifieds platform in the world, trailing only Craigslist in the USA, Avito in Russia, and Leboncoin in France. It's also one of the top-listed sites in Turkey according to Alexa. Currently Sahibinden has close to 1,000 employees, 250 of whom work in the R&D department. Sahibinden is currently running on more than 4,000 VMs in more than 150 distinct roles, distributed between two on-prem sites and a public cloud site on GCP. That's about Sahibinden. Thank you.

Okay, so as I understand it, you decided to undergo a massive cloud transformation project. What were the challenges that Sahibinden was facing, and why did you decide to go for such a transformation project?

At Sahibinden, we first wanted to modernize the underlying technology stack of the services we provide, enabling products with scalable and reusable improvements. We also wanted to reduce time to market, optimize internal investment, and establish a more secure and well-governed technology infrastructure. In our understanding, every company needs to be a technology company and should give its business good means to deliver new services to customers in a reliable, secure, scalable, and fast way. We also want to employ top talent, who thrive in companies with up-to-date technologies. Although public providers check all these boxes, we are subject to strict government regulations in the form of a local version of GDPR. That means at some point in the future we may have to suddenly abandon our public cloud extensions, so we are kind of stuck with on-prem data centers. Before 2020, we had built our virtualization and automation platform around Debian Linux and Xen, with some homegrown, Python-based infrastructure-as-code tooling we named Sahibinden Cloud Systems. When we needed that kind of infrastructure, the modern tools were not very mature yet. And this tooling, although very customizable because it's written by us, has a very steep learning curve and does not really attract people who have already learned and used open source tools for that purpose.
So in fact, we wish to minimize the footprint of components that we have to provision, operate, secure, and govern ourselves, and focus our engineering effort on our application layer and data layer. We also envisioned that the actual containerization of all our applications would be a multi-year project, and we decided not to make a big bang. So we decided to go with a lift-and-shift approach where we keep running our workloads on virtual machines and focus on replacing the underlying platform first. With these ideas and restrictions, in 2020 we started a two-phase project. In the first phase, we moved part of our workload to the Google public cloud, where we divert 20% of our traffic to keep it alive. And we actually liked our experience there, because we were not bound by any kind of hardware restrictions.

So, like you mentioned, for the second phase of the project you launched a request for proposal. What were your requirements and your selection criteria for that?

In the second phase of the project, we tried to extend the benefits of the cloudification of our applications to our two on-premise data centers by employing a private cloud solution that mimics a public cloud experience as much as possible, within the limits of technological and financial feasibility. We therefore released an RFP to private cloud vendors, and many private cloud providers, including of course Red Hat, responded to the RFP. Putting some specific details aside, I think the ideas in the RFP can be summarized as follows. Firstly, we needed a mature platform which regards both virtualization and containerization as first-class citizens. By that I mean we should be able to run our workloads flawlessly on both the virtualization side and the containerization side, and their connectivity should also be seamless. This kind of technology would allow us to shift our applications from traditional VMs to containers at our own pace, because this is a big transformation project and we can't close the shop while we are doing it. So we need time for that, and without a big bang, such a platform would enable this transformation to be as slow and as graceful as possible. Secondly, we were looking for a platform where we can strengthen our security governance as well as improve our operational efficiency. At the same time, we are trying to change the culture in the company so that the development teams have stronger autonomy. The teams are going to be responsible not only for the development of their products but also for their architecture, scaling, monitoring, et cetera. They should be able to control production concerns like the number and composition of auto-scaling policies and the selection of databases and other subsystems, test them with ease by selecting them from an application catalog, and then push those developments and changes to production with minimal infrastructure team involvement. We are also trying to avoid vendor lock-in, so that we can keep the symmetry between our on-prem data centers and cloud data centers; this is also very important for us. Lastly, in our experience, operating large open source Kubernetes installations is not really trivial work; the upgrade part especially is really painful. By upgrading different parts, you can easily end up with a configuration that has never been tested anywhere else before you, so you may be in the dark. So we need a vendor we can trust for support, and we want to be sure that they have released a very well-tested bundle to us.
This is the essence of the RFP.

Okay, so I think now we have quite an in-depth understanding of the challenges and what you were looking for. So can we get a little bit technical and see what the proposed solution architecture was and how it essentially mapped to what Sahibinden was looking for?

Yeah, actually, when we got the RFP and analyzed the requirements, we thought an OpenShift-on-OpenStack-based solution supported with Ceph storage could be a good fit, because most of the workloads were running on VMs, and so we built a solution based on that. But after we presented this to Sahibinden, the Sahibinden CTO stated that they were actually looking for a more hyper-converged and unified solution to run containers and virtual machines together seamlessly. They didn't want to invest in two separate stacks, whose integration, migration, and maintenance would have become an additional effort for them. So that's why we changed our solution to one based on OpenShift Virtualization. We offered them bare metal OpenShift supported with OpenShift Data Foundation, to run both virtual machines and containers together with the support of OpenShift Virtualization. When we proposed that, the Sahibinden team liked it, and even though OpenShift Virtualization was a fairly new technology at that time, they decided to go with this solution.

Okay, and so why did Sahibinden choose OpenShift and OpenShift Virtualization to get this massive goal over the finish line?

As Cem mentioned, they were actually looking for a stable, security-focused, comprehensive platform to run containers and virtualized applications together, so I think OpenShift was a very good fit for that. We are a leader in enterprise Kubernetes, and it is very popular in Turkey as well. And as Cem also mentioned, as you remember, they were trying to improve their developer productivity and change the culture, so that developers can try and test new applications very easily, like in a cloud environment. So I believe these were the pillars on which Sahibinden chose us.

Okay, that's very interesting. And so what's the current architecture? Can you talk us through that? How is the traffic being distributed? How are you delivering this project? Because it's so massive.

Okay, let me start with how we delivered this project, very quickly. We started the project in February 2021. We made the bare metal OpenShift deployment in Sahibinden's Ankara data center. Sahibinden's cloud system, you know, is what they were using to provision the virtual machines and configure their other infrastructure components, so they integrated their Sahibinden cloud system with OpenShift using the OpenShift Virtualization APIs. They did it very quickly, actually. After that, they migrated more than 1,500 virtual machines. These workloads include some Java applications, Sahibinden's API gateways, even Active Directory, and many stateful applications. I think that was the topic of today's session, so Cem will say much more about it, but we onboarded MySQL servers, MongoDB servers, and Cassandra and Kafka clusters. In April, this went live, and after that, these clusters started to get data synchronization traffic from the other data centers. After that, the Sahibinden team actually started to route some image and content traffic, which is the majority of the traffic happening on Sahibinden. You know, when people upload photos of their houses and cars, these are kept in a kind of private CDN network. So these kinds of things started to be served on OpenShift.
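To make the API integration mentioned above a bit more concrete, here is a minimal, hypothetical sketch, not Sahibinden's actual tooling, of how an in-house Python infrastructure-as-code tool could provision a VM through the OpenShift Virtualization (KubeVirt) API with the official Kubernetes Python client. The namespace, VM name, disk image, and resource sizes are illustrative assumptions.

```python
# Hypothetical sketch: provisioning a VM through the OpenShift Virtualization
# (KubeVirt) API, the same API surface an in-house IaC tool could call.
# Namespace, names, image, and sizes below are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running on-cluster
api = client.CustomObjectsApi()

vm = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "demo-mysql-01", "namespace": "databases"},
    "spec": {
        "running": True,
        "template": {
            "metadata": {"labels": {"app": "demo-mysql"}},
            "spec": {
                "domain": {
                    "cpu": {"cores": 4},
                    "resources": {"requests": {"memory": "16Gi"}},
                    "devices": {
                        "disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}],
                        "interfaces": [{"name": "default", "masquerade": {}}],
                    },
                },
                "networks": [{"name": "default", "pod": {}}],
                "volumes": [
                    {
                        "name": "rootdisk",
                        # Container disk used purely for illustration; a real
                        # migration would typically import an existing disk image.
                        "containerDisk": {"image": "quay.io/containerdisks/fedora:latest"},
                    }
                ],
            },
        },
    },
}

# Create the VirtualMachine custom resource in the cluster.
api.create_namespaced_custom_object(
    group="kubevirt.io", version="v1", namespace="databases",
    plural="virtualmachines", body=vm,
)
```

Importing an existing disk image rather than using a container disk would typically go through a CDI DataVolume, but from the tooling's point of view the flow is the same: render a manifest and create it through the cluster API.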
So, by July 2021, Sahibinden had routed 80% of their whole site traffic to OpenShift. Now we are working on the other data center, in Istanbul. OpenShift is set up there, and the Sahibinden team is migrating that one as well. Currently, as you can see, 80% of Sahibinden's traffic is routed to the OpenShift clusters in Ankara, and 20% is going to Google Cloud. The legacy Istanbul data center is currently decommissioned, and we are now modernizing that other data center.

Okay. And so where do we see this going in the near future?

Yeah, I think in a short time the Sahibinden team will complete the migration of the other applications in VMs to the Istanbul data center as well. After that, each OpenShift cluster will be handling around 40% of the whole site traffic, so in the case of an emergency they can switch traffic very easily between sites. And after that, we will be starting on actually modernizing the applications and onboarding containerized applications onto OpenShift.

Okay. So now can we talk a bit about the workloads that are being deployed on OpenShift? What is each of the components being used for, and what applications are being supported?

Sahibinden has a monolith, a large one, like millions of lines of code, which is the backend business services. There are two more applications, and they are a bit more lightweight: RAL, which is our API gateway, and Vinden, which is our web application. These applications by themselves span close to 600 machines in each data center, and the majority of the application lies there. There are many small microservices all orbiting this environment, but the majority of the work is there. And what we did was, as I said, we didn't want to tackle two problems at the same time, so we just did a lift-and-shift operation: they were running only on VMs, and we moved them to OpenShift Virtualization. For middleware, we mainly use Memcached, because we have to use a cache; it's our application cache. And as I mentioned, we have an internal CDN, which mainly runs Varnish, and it serves cached content like images of close to one megabyte. We also have to use Elasticsearch, because we have to search stuff on our classifieds site; at least 100 VMs host different Elasticsearch clusters. We again have to use Kafka, for event streaming and inter-process messaging. As far as databases go, we use MySQL; it's the one we treat as the absolute source of truth, because it's transactional. We also have Mongo; at the time this decision was made, Mongo didn't have any transaction support. So more ephemeral data, like login information, the kind of data we won't cry over if we lose it, lives in Mongo, although it is reliable, actually. In ClickHouse we keep user activities. It's a kind of analytical database, but it can handle very big queries on big data really fast, and you can actually connect your front-end application to ClickHouse; it's much faster than MySQL in those types of situations. And we also have to use Cassandra; it holds the impression counts, so when somebody clicks on a classified, people can track how many clicks were made on it, et cetera. All of these applications run on their own VMs on Red Hat OpenShift Virtualization at the moment. And we are in the process of modernizing; now it's time to modernize the applications themselves.
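As a side note on the seamless VM-to-container connectivity requirement mentioned earlier, one common pattern, sketched below with assumed names and ports rather than Sahibinden's actual configuration, is to front a VM-hosted database with an ordinary Kubernetes Service, so containerized consumers are unaffected if the backend later moves from a VM into a container.

```python
# Hypothetical sketch: exposing a VM-hosted database to containerized apps
# through a plain Kubernetes Service. The selector matches the labels on the
# VM's pod template (see the earlier sketch); names and ports are assumptions.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="demo-mysql", namespace="databases"),
    spec=client.V1ServiceSpec(
        selector={"app": "demo-mysql"},  # labels set on the VM's template
        ports=[client.V1ServicePort(port=3306, target_port=3306)],
    ),
)
core.create_namespaced_service(namespace="databases", body=service)

# Applications in the cluster can now reach the database at
# demo-mysql.databases.svc:3306, whether it runs in a VM or, later, a container.
```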
We recently started a microservices project where we attack those monoliths and split them into smaller parts, so we can run them in containers instead of virtual machines.

That's very interesting. And so what was your experience like deploying these workloads on OpenShift using OpenShift Virtualization? How did you migrate them? How was it?

Our infrastructure-as-code tooling actually allowed us to easily add Red Hat OpenShift Virtualization as a virtualization provider, and we made this change. I mean, we were already able to set up a data center really fast; the migration of data, et cetera, that's a different matter, but the actual provisioning of a data center was really automatic and fast on Xen. We added Red Hat OpenShift Virtualization as another provider, and it was as easy as our old way. So this was our approach. As far as running databases on OpenShift Virtualization is concerned, we didn't feel any difference. I mean, it's not getting any worse; it's better, actually, more performant compared to Xen. Other than that, we haven't yet used the existing databases from the application catalog, which we wouldn't have to operate ourselves. But with the microservices transformation, we expect the developers to include their own databases from the application catalog. Then they can develop whatever they want and push it to production, where they themselves decide on the size of the database: how many CPUs, how many replicas, its topology. So we are hopeful about the future.

Okay. Okay, I think those are most of my questions. Thank you so much, Cem and Erkin. This has been really excellent. Thank you for going over things in such great detail. Thank you very much. Thank you for spending this time with us.