Hi, and welcome to this webinar. My name is Jonathan Seelig. I am the executive chairman and co-founder of Ridge, and I'm here with Nir Sheffi, who is also a co-founder of Ridge as well as the co-CEO and CTO of the company. Today we'd like to spend some time talking to you about the distributed cloud paradigm, which is an alternative to the centralized cloud model that most of us are familiar with. We also want to talk about how managed Kubernetes powers this model of a decentralized, distributed cloud and makes it possible to deploy and scale cloud native applications virtually anywhere in the world, without some of the latency and data residency challenges that we currently see when we look at the large public clouds. So let's dive in and talk a little bit about what this new paradigm looks like and how it works. Great, thank you, Jonathan. As you know, most public cloud services today are provided through centralized cloud platforms such as the big hyperscalers. These clearly work amazingly well and provide many services that I'm sure many of you are using today, but we're seeing that the centralized public clouds sometimes leave what we call coverage gaps. In other words, they can't provide the level of service that you need. For example, if you need your workloads to run in close proximity to your customers and the public cloud is not there, that's a physical coverage gap. More and more applications and use cases need to be in close proximity to end customers in order to offer good performance, and as new communication technologies such as 5G roll out, we're seeing more and more demand for high throughput and low latency. There are also data-related coverage gaps, another reason why the centralized cloud is sometimes not a good fit for application owners. Due to data regulation and data sovereignty issues, you may need to keep data in-country, and in many of those situations the public cloud is not good enough. Lastly, there are commercial coverage gaps, for example architectural decisions you have already taken or existing relationships with local data centers. So for all of those reasons, the centralized cloud may not be sufficient to satisfy all the cloud needs of many application owners. Nir, that's a pretty good summary of what we've been talking about a lot inside the company at Ridge. It all points to this need to rethink the paradigm and ask the question: why does a cloud need to be centralized? The world is really big and the demand is huge from all sorts of different geographies. How is it reasonable to think that the hyperscalers are going to be the complete answer for every single application owner's cloud needs in terms of coverage, geography and network? Although the large clouds have amazing economies of scale and amazing capabilities, in many, many cases locality and distribution can offer benefits that outweigh the size and scale of the hyperscale clouds. So the benefit of being distributed can be very significant, depending on the application type and the application requirements. The ability to deploy and scale where you need to be, even the ability to add a point of presence if one isn't available in a particular geography that you want to run in, which is something we've seen from some of our customers at Ridge, can be a really big deal.
And of course, the distributed paradigm of infrastructure is also very relevant to both hybrid cloud and multi-cloud models, both of which are very much in active conversation with a lot of enterprises and companies out there. It has become a fundamental part of any company's hybrid or multi-cloud architecture to understand whether it will need some kind of distributed cloud capability. You know, Jonathan, now that we've discussed the public cloud coverage gap challenge and we've raised the idea of a non-centralized cloud, or what we call a distributed cloud, let's discuss how it's done. Our vision when we founded the company was that we wanted to run our cloud on any underlying infrastructure: any heterogeneous physical servers, any underlying IaaS or virtualization system, or bare metal machines. That way we could achieve a cloud that hypothetically could be expanded to hundreds and thousands of locations, or regions for lack of a better word, and offer fast integration and capacity all over the world. To users, it would feel exactly like the public cloud they're used to and familiar with. For this to work, we've built a platform based on cloud native building blocks. The first of these is a fully managed Kubernetes solution, which lets users run whatever they would like to run, since it's based on the de facto spec for deployment on a cloud, and that's Kubernetes. Any application running on EKS, GKE or AKS on AWS, GCP or Azure can run on Ridge without changing a single line of code, except that with Ridge it can run in hundreds and thousands of locations. The second building block is our container service, which allows users to deploy containers. If you don't want, or don't need, full-blown Kubernetes, you can just say, I want this image, run it in hundreds and thousands of locations, and we take care of all the heavy lifting on the physical infrastructure. The last building block we've deployed is our object storage solution, with a fully compatible S3 API; again it follows a de facto spec, in this case S3, but the difference is that it can run globally in hundreds and thousands of locations across the Ridge network. All of this works on top of any underlying physical infrastructure. Ridge doesn't own any of it; we use the amazing data centers and telcos that are already out there, and that's why we can scale to an almost endless number of public or private regions. But most importantly, beyond all of those capabilities, Ridge is a cloud. You engage with it as you would with any modern public cloud, through a simple online interface. As a customer you just need your credit card and you pay as you go; you don't need any prior commercial agreements with any of our data centers around the world. The Ridge distributed cloud lets developers describe the required resources as they deploy their Kubernetes clusters, containers or object storage, and as a managed Kubernetes service, the distributed platform adjusts workloads automatically by spinning up computing instances wherever they're needed. I will soon show you a demo of how it works. So, Nir, before we get into the demo, I think one of the things to talk about here is that the flexibility and functionality you've described is becoming more and more essential with the increase in cloud native activity and cloud native application development, and with the need to be able to do this anywhere it's needed.
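To make the S3-compatibility point a bit more concrete, here is a minimal sketch of what it means in practice: any standard S3 tool can simply be pointed at a different endpoint. The endpoint URL and bucket name below are placeholders for illustration, not Ridge's actual values.

```sh
# Illustrative sketch: an S3-compatible object store works with standard S3 tooling
# once the client is pointed at the alternate endpoint. The endpoint URL and bucket
# name are placeholders, not Ridge's actual values.
ENDPOINT=https://objects.example-region.example.com

aws s3 mb s3://my-demo-bucket --endpoint-url "$ENDPOINT"                     # create a bucket
aws s3 cp ./report.pdf s3://my-demo-bucket/report.pdf --endpoint-url "$ENDPOINT"
aws s3 ls s3://my-demo-bucket --endpoint-url "$ENDPOINT"                     # list its contents
```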
The big promise of cloud computing was always the abstraction of infrastructure complexity, meaning that developers were going to be freed up to focus on writing great code. But in a lot of the conversations we've had with folks, we find that today's advanced containerized, microservices-based, cloud native applications are often so complex that developers end up spending more time dealing with infrastructure configuration and design than with coding. So it's not quite what the promise was supposed to be. You know, very true. As the de facto standard for container orchestration, Kubernetes plays a role as an enabler of cloud native application deployments, offering huge flexibility in moving workloads between environments. However, the full potential of many cloud native applications, often with strict latency and throughput requirements, cannot be realized until they can be deployed anywhere, to ensure superior performance. And we're seeing applications today that need that performance. For example, we have a customer offering a remote desktop and VDI product, and they need extremely low latency. There are also many emerging apps, like connected vehicles and all kinds of AR and VR applications, for which proximity is essential. And of course the future: who knows what apps will look like in five years, but I bet they'll be very dependent on low latency. I think that's for sure. As we engage customers, we keep seeing that more and more applications are being developed to be latency sensitive, and their owners care a lot about that. Before Nir starts the demo of how managed Kubernetes is used in the distributed architecture we're describing here, I want to discuss just a couple of our current deployments with customers who are running applications on Ridge, applications that really were made possible by Ridge's distributed cloud paradigm. The first one I'll describe is a provider of browser isolation, which you were mentioning just a minute ago, browser isolation and cybersecurity solutions. The solution is basically built around replicating desktops to make sure they are malware free. As you can imagine, end users who are using these remote desktops can't be allowed to sense any delay or lag in their browser; the simulated desktop has to feel like a local desktop experience. The company we were working with on this deployment told us that when they had a set of users in Paris connecting to a hyperscale data center in Frankfurt, the latency on that communication path was simply unacceptable. People using that virtual desktop offering felt lag. So the ability to find a distributed solution that gives them a point of presence right next to those Parisian users was critical for the functionality and the customer satisfaction of their offering. Another deployment I can describe, which is a pretty interesting one, is a customer that created eyewear simulation software that lets you try on glasses virtually through an app. It's a large omnichannel eyewear retailer, and users love this functionality. But this functionality depends on having GPUs in proximity to the end user. All of this is managed with Kubernetes as the platform for these workloads, and the customer's workloads are running on local data centers in lots of different places.
Moving this capability to a public cloud really wasn't an option that was going to be effective for this company. It would have added a lot of latency and degraded the app experience. So they came to us because they knew we had the ability to easily give them these cloud native services, with GPUs on the back end, in lots of localities where they needed that capability. Those are just a couple of examples of places where we've found real embracing of, and in fact a requirement for, a highly distributed cloud in order to make the specific applications that customers have come to us with work. Thanks, Jonathan. And now I think it's a good time for us to begin a demo of how the platform works. Good. I'll stop sharing this screen, and Nir will bring up his desktop and take you folks on a tour of how the Ridge cloud operates. Cool. Thank you, Jonathan. So this is Ridge. This is the UI, and obviously there's an API as well. You're welcome to go to our website; there's a link to our developer portal where you'll be able to see all of it. There's a fully RESTful API, so you can download our OpenAPI spec and try it out. And this is the UI through which the end user, developer, DevOps, IT and so on interacts with our cloud. Once you log in, and we are connected to external identity providers such as Google, GitHub and Microsoft and other external auth services, you are in the context of an organization and a project. We handle all of our own identity and access management: you can manage members, give them permissions and so on, very similar to what you might find on a public cloud. I'm not going to go into too much detail on that in this demo, but bear in mind that we do provide it out of the box. What you see here are data centers. We are currently connected to hundreds of data centers in production, and we can connect to more and more locations as time goes on and as customers demand. We connect to public data centers, similar to what you might call or use as zones or regions, and you can see here that we show you, the end users, those public data centers that Ridge has integrated with and has commercial agreements with. As you can see, we don't hide the fact that a location is operated by a specific data center provider. For example, this one is operated by Catalyst from New Zealand, and you can see everything in a transparent way: certifications, hardware specifications, obviously the location, and pricing. The pricing here is, for lack of a better word, by instance type. But as you might imagine, each data center, each location, has its own heterogeneous underlying infrastructure, and we show it to you transparently. So you can see different providers, everything out in the open, and choose the best location, certifications, best SLA and obviously the best price. For example, this is one of our partners in Hong Kong, and you can see that the instance pricing is a little bit different because they offer the ability to have flexible resources. So you can do some cool things and not just use specific instance types, and obviously the pricing is a little bit different because it's priced by CPU, memory and storage. You can get all of this information through our UI or our API. So those are the public data centers that we manage.
As you might imagine, we are also able to connect to on-premise or private data centers. So if you, as a customer, have some internal, private installation in your own data center, or on top of one of the data centers that are already out there, we can connect to that, and to any underlying IaaS technology based on VMware, OpenStack or whatever other flavors and versions, and it becomes a point of presence, or a region, in our system. Obviously it is fully private to your organization; the system is fully multi-tenant, so nobody else is able to connect to it, and you can deploy anything there that you could deploy in the public regions using Ridge. So those are the public data centers, and we also support on-premise installations that we can connect to. On top of all of those data centers we have developed web services. We take legacy infrastructure, like a basic IaaS solution, and turn it into fully cloud native web services. Our flagship, which I can show you right now, is our fully managed Kubernetes solution: the same features and capabilities you might find on AWS, GCP or Azure with EKS, GKE or AKS. The only difference is that our solution can run in hundreds and thousands of locations across all of the data centers we are integrated with. We've put a lot of effort, as you'll be able to see, into making this a very, very simple onboarding experience. You'll see that we can, for example, spin up clusters in a few minutes, around three to four minutes, and we manage the cluster end to end: auto-provisioning, auto-scaling, auto-healing, auto-upgrades, and obviously we manage all of the underlying infrastructure, like load balancing, persistent volumes and so on. So let me show you how easy it is to create a Kubernetes cluster in one of the Ridge points of presence around the world, and again, this could be any of hundreds and thousands of locations. Let's call this cluster demo. We support both highly available and non-highly-available control planes, which determines the number of master nodes. Obviously, if you don't need to be highly available, for example for development or QA, you can uncheck this and we only create one master. We support multiple Kubernetes versions. We are a member of the CNCF and we pass all of the CNCF conformance tests, which means we are a fully certified Kubernetes distribution and certified hosted provider, in the same way that AWS, GCP or Azure are. So if something runs on Kubernetes, it can run on Ridge seamlessly; you don't need to change one line of code. Now we can choose the location. Let's choose something in Paris, for example. As you can see, this is one of our partners, Orange, in Europe, so I'll choose this one. Then I add a node pool. For those of you who are not familiar with the term, a node pool is a group of worker nodes, and worker nodes are the machines that actually do the work. So I give it a name. We support full autoscaling capabilities, which means I can say I want a minimum of two worker nodes in this node pool and a maximum of, say, three nodes, and we automatically scale the pool up whenever Kubernetes cannot schedule a pod because of a lack of resources: we add a node and the pool grows automatically. In this demo, though, I'm not going to autoscale anything.
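To make the node pool autoscaling behavior just described a bit more concrete, even though it isn't exercised in this demo, here is a minimal sketch using plain kubectl. The deployment name and replica count are purely illustrative; the scale-up itself is performed by the platform once pods are left pending, not by these commands.

```sh
# Illustrative only: scale a hypothetical deployment beyond what the current
# worker nodes can hold. Pods left Pending for lack of resources are the signal
# that causes the node pool to grow, up to its configured maximum.
kubectl scale deployment my-app --replicas=20

# Watch for pods stuck in Pending; once an extra worker node joins, they get scheduled.
kubectl get pods --field-selector=status.phase=Pending
kubectl get nodes --watch
```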
And then, as you can see, I chose a location in Paris, and all of the resources shown here were populated according to that location. For example, these are the instance types we can run in this specific location. If I choose another location, say the one in Bangalore, you'll see it's a little bit different, because that one offers more flexibility in the instance types. Let's go back to our data center at Orange, and let's choose a small machine with two CPUs and four gigabytes of memory. As you can see, there's a cost estimate here. You can add labels and other settings to this node pool, and add more node pools with different sizing and different capabilities. But basically that's it. Once I press create, you can sit back and relax; it takes about three to four minutes, so we're pretty fast at auto-provisioning the cluster. We create a fully isolated cluster here, similar to a VPC, by creating an isolated VLAN. We create the machines and install an operating system on each one. As you can see here, it has already allocated a NAT gateway IP for the VPC. Then we install Kubernetes, certificates and security, and we configure all of the underlying infrastructure so you don't need to: load balancing, persistent volumes, everything. Once the cluster switches from a creating state into a running state, that means all worker nodes have been provisioned correctly and that the worker nodes and master nodes are in a ready state, so they can accept deployments and you can start deploying an application. So that's auto-provisioning. As you can see here, this is running in Paris right now, on top of one of our partners, Orange. It's highly available, meaning three master nodes, and under node pools you can see there are two worker nodes being created right now. So that's auto-provisioning. We support autoscaling for each and every one of the node pools, and we also monitor the integrity of the cluster 24/7. That means that if one of the nodes fails, for whatever reason, we know how to auto-heal it: we gracefully kill the unhealthy node and create another one in its place, and it joins the cluster. So you don't need to wake up at 2 a.m. to fix the cluster; everything is done seamlessly for the end customer. So we've covered auto-provisioning, auto-scaling and auto-healing. We also have auto-upgrades: between Kubernetes versions, you can upgrade with the click of a button. Once the cluster is upgradeable, the option appears in this menu, and you can click it and it will be upgraded. We intend to release versions 1.22 and 1.23 in the next few weeks. We also take care of all of the underlying infrastructure, which means we know how to configure everything from load balancers to persistent volumes, so you don't need to. The only thing you as a user need to do is deploy your app. What I'm about to show you right now is that once this cluster switches into a running state, hopefully soon, if the gods of the demo like me, I will create an access key and a configuration file for it, and we'll deploy an application. We'll deploy something simple using standard Kubernetes tools such as Helm, from the Bitnami Helm repository. I'm going to deploy WordPress, which is a website, from Bitnami. The chart deploys a WordPress application as a container.
It will also deploy MariaDB, which is a MySQL-compatible database, and it will require a load balancer, because we want our customers to be able to send ingress traffic into the cluster, and it will use persistent volumes as well, so you'll see those being created. Yay, the cluster is running; that took less than four minutes. So what I'm about to show you demonstrates how we configure everything seamlessly, all of the physical resources, so you don't need to. Let me just create an access token here. This is like giving permissions to one of the members of my organization; in this case that's me. I'm going to grant myself access, let's call it demo, and I can associate this with an RBAC group internal to the cluster, and create it. As you can see, this creates a standard Kubernetes configuration file, which I can download to my machine, which I did. If I switch to my command line tool, I can export KUBECONFIG pointing to the file in my downloads folder; that's my demo file, I hope it's this one. And then kubectl get nodes. If everything works fine, you will see that we have three master nodes in Paris right now and two worker nodes, as we expected. Let's deploy our WordPress. This is done from the Bitnami stable repository. For those of you who are not familiar with it, it's similar to an app store: people can use the many charts there, which are basically apps that can be deployed on top of Kubernetes. Let me clear this and let's see what we have deployed. If we look at pods right now, you'll see there are two pods running, WordPress and MariaDB. And let's look at services: you'll see there is a service of type LoadBalancer, which needs ingress traffic into the cluster. If we switch to our UI, you'll see the cluster switched into a configuring state and then, pretty fast, we missed it, back into a running state, and you'll see that a load balancer was created for us here. So as you can see, we take care of all of the underlying configuration: we automatically knew that the application requires a load balancer with those protocols and ports. We also support firewall configuration and health checks, as you can see here; those are the health checks on the nodes for each port. We also take care of the public IP, so you can see that in this case we allocated a public IP, and that public IP is propagated internally and wired up into Kubernetes. If we look at persistent volumes, you can see that we have requested two disks: eight gigabytes, which should be connected to MariaDB, the database, and ten gigabytes, connected to WordPress. Persistent volumes mean that if the node goes down or the pod goes down, no worries, the data still persists; Kubernetes can allocate a new pod on another node and the system keeps functioning with no data loss and minimal downtime. And by the way, before I go on, if we look at our pods with a bit more information, you'll see that Kubernetes has decided to schedule WordPress on this node and MariaDB, the database, on a different node.
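For reference, here is a rough sketch of the command-line steps narrated in this part of the demo. The kubeconfig filename and the Helm release name are illustrative rather than the exact ones shown on screen; the chart repository is the public Bitnami one.

```sh
# Point kubectl at the new cluster using the access configuration file downloaded
# from the UI (filename is illustrative).
export KUBECONFIG=~/Downloads/demo-kubeconfig.yaml
kubectl get nodes                 # expect three master nodes and two worker nodes

# Deploy WordPress (which also brings up MariaDB) from the public Bitnami chart repository.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install demo bitnami/wordpress

# Inspect what was created: the pods, the LoadBalancer service, and the volume claims.
kubectl get pods -o wide          # shows which node each pod landed on
kubectl get svc                   # the WordPress service is of type LoadBalancer
kubectl get pvc                   # roughly 8Gi for MariaDB and 10Gi for WordPress
```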
And as you can see here, if we go to persistent volumes, our system knows that these were requested by Kubernetes and knows how to create those disks, attach them to the specific nodes, different nodes in this case, as you can see, and wire it all up inside Kubernetes. If a node or a pod goes down and Kubernetes reschedules it, we know how to detach the disk and attach it again, so you, as application developers, don't need to do anything other than deploy your application, as I showed you here. Let's see if our application runs, and as you can see, it does. Let's copy the IP here, and if we browse to it, we should get our website. Yay. So in under five minutes we created a fully running managed Kubernetes cluster, very similar to what you might find on the hyperscalers, in a very, very simple way, and we can do it in hundreds and thousands of locations across the Ridge network. That concludes the Kubernetes part of the demo. On that note, I'd like to thank you all for joining this webinar. Feel free to contact us at any time; we'd love to open up a trial account for you so you can play around, and if you have a specific use case or a specific question, don't hesitate to let us know. I do appreciate you joining this webinar. Thank you very much.