Swapnil Bhartia: Hi, this is Swapnil Bhartia, and welcome to another episode of TFiR Let's Talk. Today we have with us Daniel Bartholomew, co-founder and CTO of Section. Dan, it's great to have you on the show.

Daniel Bartholomew: Thank you for having me. It's a pleasure.

Swapnil: Since you're a co-founder, I'm also curious to hear the story of Section: why you created it, and what problems you're looking to solve within the larger ecosystem.

Dan: I'm really pleased to speak to you about Section. Section was born out of a combination of frustrations that I and my co-founder, Stewart McGrath, had as we were building some large-scale e-commerce websites. We found that we were relying on hosting companies to provide us with servers to run our software, and that we also needed content delivery networks to manage the performance and scalability of our websites. Stewart and I have always been early adopters of agile methodologies and continuous integration and continuous delivery platforms, and what we found as we built those e-commerce websites was that there was a really big disconnect between what was happening in cloud and what was happening in content delivery networks. Clouds gave us a superior degree of flexibility to build and run our software, but content delivery networks always seemed rather rigid in their design and in the features they had available. So after identifying this disconnect, Stewart and I set about building a system that would have the networking properties of a content delivery network, in that it's massively distributed, but that would also let people choose what software they actually run inside the points of presence. What we ended up building is a large distributed compute platform that allows people to bring whatever software they like, in the form of a Docker container, and have it run across all the different PoPs inside Section's network.

Swapnil: So if I look at the Section platform today, what exactly is it? What happens in most cases, and I think you folks even predate Kubernetes, is that these technologies start off solving one problem, but as the market and users evolve their own workloads, the platform itself evolves. So if you look at Section today, how would you define the platform?

Dan: Section today is a distributed, general-purpose workload system; that's probably the fewest words I could use to define it. But you're exactly right about the history. As Section started out, it really did focus on tackling CDN-style problems. We started with content caching capabilities, and then we added web application firewalls, image optimizers, bot blockers, A/B testing, and virtual waiting rooms, things you would normally see inside a CDN. But as Section developed each one of those modules, the caching or the web application firewall, we built it inside a Docker container, and that allowed our customers to choose which CDN features they wanted by selecting different containers. So instead of a rigid design, it's a composable design for each customer's CDN needs. And because we were just using Docker containers to do that, Section in its current form actually exposes our entire network to our customers as though it were a single Kubernetes cluster. What that means is that I can go and choose Section's CDN features and compose them together, but I can also build my own applications and deploy them in Section, and the way I do that is exactly the same as the way I would deploy any application to a single Kubernetes cluster. Under the hood, each one of our points of presence is a unique Kubernetes cluster that we run for our customers. However, we run an overlay, basically a virtual Kubernetes cluster, that our customers interact with, and it orchestrates the underlying physical Kubernetes clusters running across the globe.

Some examples of where people are using us today: because of our strong origins in e-commerce, we have customers building Node.js, GraphQL, and React-style applications. Instead of running those Node.js or GraphQL components in a hyperscaler, they're lifting and shifting those containers out of hyperscaler environments and running them in Section. The way we facilitate that, because everything in Section is now Kubernetes-based, is that we ask our customers: instead of targeting this single Kubernetes cluster that you have, update your tooling to point to Section's Kubernetes API, and then we'll be able to distribute your containers across the network without you making any changes. And the reason they do that is generally for user experience and scalability improvements. By moving their software closer to users, they're able to dramatically improve the page load times of these e-commerce websites, which for those businesses translates into increased conversions.

Swapnil: Now, I want to talk about some news you folks made at the end of January: support for scaling Mastodon, and also persistent volume storage. Let's talk about these two. First of all, why Mastodon specifically at this point? And of course we can talk about the persistent volume storage as well.
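The retargeting workflow described above can be pictured with ordinary Kubernetes objects. This is a minimal sketch, not Section's documented interface: the API endpoint, cluster names, and container image below are hypothetical placeholders. The point is that the Deployment manifest itself is unchanged; only the cluster address the tooling targets moves from a single cluster to the platform's Kubernetes API:

```yaml
# kubeconfig fragment: point existing kubectl/CI tooling at the
# distributed platform's Kubernetes API instead of one cluster.
# (The server URL is a hypothetical placeholder.)
apiVersion: v1
kind: Config
clusters:
  - name: section
    cluster:
      server: https://kube.section.example
contexts:
  - name: section
    context:
      cluster: section
      user: developer
current-context: section
---
# The application manifest needs no changes: a standard Deployment,
# which the platform can then distribute across its PoPs.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: storefront
spec:
  replicas: 2
  selector:
    matchLabels:
      app: storefront
  template:
    metadata:
      labels:
        app: storefront
    spec:
      containers:
        - name: storefront
          image: registry.example/storefront:1.0  # placeholder image
          ports:
            - containerPort: 3000
```

(In practice the kubeconfig and the Deployment would live in separate files; they are shown together here only for illustration.)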
Dan: Mastodon is readily available in a Docker container, and it's actually easy for people who want to create their own online communities using Mastodon to join Section and ask Section to run the Mastodon Docker container for them. Now, you could do that on a platform running in a single data center; you could spin it up in a hyperscaler, for example. But by doing it in Section, the user experience is again improved, because we're able to run the Mastodon server close to where the users are. In fact, one of Section's strengths is a core technology we call the Adaptive Edge Engine, which dynamically moves the Docker containers around the network depending on where the users are. This has a great benefit for all of our customers, because deploying Docker containers in one or two or ten data centers and having them always on, regardless of when traffic comes, increases cost. What Section does is continuously monitor where the users are coming from, and then find the places inside the Section network that best suit those users. That means you don't need to run containers in locations with a suboptimal cost-benefit ratio; maybe there aren't enough users in Europe in the middle of the night to warrant running a container there. Section will continuously detect where the users are and move the containers to those locations. So that core technology, the Adaptive Edge Engine, is really important. Because of the way Section builds its network, we're able to easily make Docker containers run across multiple hyperscaler clouds, or across multiple data centers, without any heavy lifting from our end users.

The second thing you asked about was our recent announcement of persistent storage. Given Section's origins in performing CDN functions, Section for a long time only supported what we call ephemeral state, meaning state that the application is able to lose, so that we could restart it; that was really important for our ability to move containers around the network. However, we had a lot of feedback from customers that they really wanted some local storage inside our points of presence. So what we've recently enabled for our customers is this: inside each point of presence, we make a shared disk available to each customer's workload. That means all of the containers we run inside a point of presence have access to this shared disk. It's useful for containers to share information with each other; it can be used as a backing store so that you could run a distributed database; and it's also useful for the general operations Kubernetes performs on containers, such as scaling them out: when a new container comes up as you scale out, it already has access to all the state the other containers may have created. And in the case where a container crashes and Kubernetes automatically restarts it, the container can pick up where it left off without having to do any state management.

Swapnil: I also want to talk about something a bit different. We've talked about announcements; let's talk about something contemporary as well, and that can be a couple of things. First of all, as we discussed, you folks predate a lot of these cloud technologies like Kubernetes. Talk a bit about the evolution you're seeing in this space. First it was all stateless workloads, then stateful workloads, and now there are reports that Kubernetes adoption is growing beyond its general use cases. What are you seeing in this space, and how is Section preparing itself to address some of those use cases?
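The per-PoP shared disk described above maps naturally onto Kubernetes' shared-volume primitives. As a hedged sketch (the names, image, and storage size are invented for illustration; Section's actual storage class and limits aren't specified in this conversation), a ReadWriteMany claim mounted by every replica gives exactly the behaviors listed: shared state between containers, a backing store, pre-populated state for scaled-out pods, and survivable restarts:

```yaml
# A claim for the per-PoP shared disk; ReadWriteMany means every
# container in the PoP can mount the same volume at the same time.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-state
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
# A Deployment mounting the claim: a newly scaled-out or restarted
# container sees whatever earlier containers wrote to /var/shared.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-with-shared-state
spec:
  replicas: 3
  selector:
    matchLabels:
      app: shared-demo
  template:
    metadata:
      labels:
        app: shared-demo
    spec:
      volumes:
        - name: shared
          persistentVolumeClaim:
            claimName: shared-state
      containers:
        - name: app
          image: registry.example/app:1.0  # placeholder image
          volumeMounts:
            - name: shared
              mountPath: /var/shared
```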
Dan: Yes. What we're seeing is maybe a couple of different things. Firstly, we're seeing the need for distributed workloads that are not HTTP-based occurring more and more. CDNs, and especially CDNs that have serverless function capabilities, have been targeting HTTP workloads. But what we are seeing is customers coming to us and saying: we have this piece of special software, it doesn't use HTTP, it uses a TCP protocol, and we really want to get it distributed, but we don't want to build the Anycast networks or any of the DNS infrastructure, and we don't want to have to run, you know, 20 or 50 points of presence. We just want to build our container, deploy it, and have somebody look after that. So we are seeing a drive toward non-HTTP protocols, which is something that Section is aggressively pursuing at the moment.

Going back to the concept of the Adaptive Edge Engine, one of the key motivators for us in designing and creating this technology is that as we see the market for edge computing grow, and we get many, many more locations, what's needed is a computational approach to deciding where and when to run the software. And that is exactly the problem that the Adaptive Edge Engine tackles.
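That "computational approach to deciding where and when to run the software" can be illustrated with a toy placement heuristic. To be clear, this is not Section's Adaptive Edge Engine, just a greedy sketch of the general idea with made-up data: given observed demand per region and PoP-to-region latencies, pick a small set of PoPs that puts most of the traffic within a latency target, so that quiet regions don't pay for always-on containers.

```python
# Illustrative sketch only: a greedy placement heuristic in the spirit
# of the adaptive-edge idea. All names and numbers are invented.

def place_containers(demand, latency_ms, target_ms=10, coverage=0.90):
    """Pick PoPs so at least `coverage` of traffic is within `target_ms`.

    demand:     {region: requests_per_minute}
    latency_ms: {(pop, region): round_trip_ms}
    """
    total = sum(demand.values())
    chosen = []
    pops = {pop for (pop, _region) in latency_ms}

    def covered_traffic(extra_pop=None):
        # Traffic within the latency target of any running PoP.
        pops_on = set(chosen) | ({extra_pop} if extra_pop else set())
        return sum(
            reqs
            for region, reqs in demand.items()
            if any(
                latency_ms.get((p, region), float("inf")) <= target_ms
                for p in pops_on
            )
        )

    while total and covered_traffic() / total < coverage:
        candidates = pops - set(chosen)
        if not candidates:
            break
        # Greedily add the PoP that brings the most new traffic in range.
        best = max(candidates, key=covered_traffic)
        if covered_traffic(best) == covered_traffic():
            break  # no remaining PoP improves coverage
        chosen.append(best)
    return chosen


# Tiny made-up example: two PoPs cover ~95% of traffic within 10 ms,
# so the low-traffic European region gets no always-on container.
demand = {"us-east": 500, "us-west": 400, "eu": 50}
latency_ms = {
    ("nyc", "us-east"): 5,  ("nyc", "us-west"): 70,  ("nyc", "eu"): 80,
    ("sfo", "us-east"): 70, ("sfo", "us-west"): 6,   ("sfo", "eu"): 140,
    ("fra", "us-east"): 90, ("fra", "us-west"): 150, ("fra", "eu"): 8,
}
print(place_containers(demand, latency_ms))  # prints ['nyc', 'sfo']
```

Raising the coverage directive (say, to 100%) would pull the `fra` PoP in as well; a real system would additionally re-run this continuously as demand shifts, which is the "when" part of the problem.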
To take a little example: let's say a large US ISP makes racks of servers available across, you know, a hundred or a thousand locations across the US. If I'm a developer, I say: hey, I've just built this fantastic container and I want to get it as close to my users as possible. I don't want to sit there and say I'd like to run in Denver, where I am, and in this city and this city and this city, and I'm willing to pay for four instances of my container in all of these cities. What I really want is a programmatic system to move my containers, where I might say: I would like to be within 10 milliseconds of 90 percent of my users. That is my directive, and that is the goal Section's Adaptive Edge Engine is trying to achieve. We don't want to physically nominate exactly where our containers run; we want to state our intent and then have the system manage that for us.

Swapnil: If you just look at the current situation in the market, depending on how you look at it, a lot of layoffs are happening, and companies are looking at the cost of cloud; they are looking at cost cutting. So talk a bit about the trends you're seeing and how Section makes companies more cost efficient.

Dan: A market segment Section is really driving into at the moment is operating distributed compute networks for customers that build platforms as a service. For example, we see a lot of great API technology emerging at the moment; there's a lot of adoption of things like GraphQL in the market. A lot of the people who create GraphQL APIs are looking to improve end-user performance by becoming distributed, to overcome some of these speed-of-light problems. However, they don't have the operations capability to run a large number of clusters, which means making sure the clusters are healthy, making sure they're patched, making sure there are no security vulnerabilities, and doing all of the penetration testing across that network.

Another thing we see, going to your point on the skills base, is that some of the advanced networking capabilities you need to build this kind of system are actually really hard to obtain. When we encounter people who are trying to build things like Section in house, we find that they take on a big innovation burden, in that they need to solve a lot of these problems around networking and cluster management, but they also take on the day-to-day operations for basic things like Anycast DNS, Kubernetes clusters, and the constant failures you have across a large network, which are really not in their core skill set. By allowing these API specialists to work within the areas where they have fantastic specialization, creating GraphQL APIs and things like that, and then providing them a developer and operations experience that is the same as running a single Kubernetes cluster, we're able to give them the benefits of all of this distribution without having to do all of that operations work. I think that is a really key driver that allows our customers to innovate and stay ahead of the performance you might see from hyperscaler offerings, without taking on the burden of having to train a team, obtain that skill, and keep maintaining it.

Swapnil: Dan, thank you so much for taking the time today to talk about Section and your story, and for sharing your insights on where the market and the ecosystem are heading. I loved that, and I would love to have you back on the show. Thank you.

Dan: Thanks very much.