Thank you all for coming. It's really, really great to finally see people in person when I give a talk, so feel free to make facial expressions based on how much you like or don't like what I say. Today I will be talking about edge computing, because everybody does it these days one way or the other, and I will also introduce you to a really cool open source project that I'm actually the community manager of. So hopefully you will like what I have to say today. As for me, my name is Ildikó Váncsa. I work as Senior Manager of Community and Ecosystem at the Open Infrastructure Foundation. You can also find my email address on the slide, so if you want to catch me and you don't have time after the talk, feel free to drop me an email and have a follow-up conversation about the topics that I go through today.

As I said, edge computing is the main topic of this talk, and I will not go into defining what edge is; I don't believe in that exercise anymore. However, I would really be interested in figuring out what your interest in edge is. Is there anyone here who has any edge production deployments running? Awesome. Who is here looking for a platform and the components to put together your solution and deploy it? Not that many. Okay. So you all just want to learn about what's in this space, what's new, what's exciting. Okay, sounds great. Then let's dive into it, and let me start with the boring telco and 5G use case.

I wanted to bring this up because when it comes to edge computing, the edge part of it really depends on what your use case is and what role you have in that use case: whether you're delivering one part of it, or you're the organization whose business depends on that use case. So edge means really different things to different people and different organizations. However, when you take a high-level, bird's-eye view of edge, you will see that there are a lot of similarities between use cases, as well as in what you need, what you're looking for, and the challenges that we all need to overcome.

So, telecommunications and 5G: even though it's not necessarily a straightforward edge use case, to some extent it is. It is an industry segment that really is pioneering this space, because when it comes to edge, one thing that I believe we will probably all agree on, hopefully, fingers crossed, is that edge is on the edge of something, which is usually the network. And someone has to provide that network; someone has to provide that connectivity. So when it comes to the telecom operators, there really is big pressure on them, on one hand to evolve their business and provide new services, functionality and more excitement to their users, and on the other hand to provide the connectivity for everybody else to be able to run their edge use cases as well. This is why this one is so much in the spotlight. And since telecommunications is a highly regulated industry, they also have a lot of strict requirements. When it comes to anything from real-time behavior to latency and bandwidth, they really are under high pressure to deliver on their SLAs and other requirements. The other thing with telcos is that most of them are running infrastructure at a massive scale, and it's also a massively geographically distributed scale. So when it comes to challenges like deploying that infrastructure, that is already really, really hard. And that's when the real excitement starts, which is: so I have it up and running, what now, beyond not touching it ever again because it currently works?
So day-two operations, as they call them, are something that is a big challenge for telcos. And as more and more edge use cases are getting rolled out in production, it's not just a telco challenge anymore. How do you orchestrate, manage and maintain a massively geographically distributed infrastructure? That is not necessarily a solved challenge yet.

When it comes to edge computing and industry segments, what you can also see, again depending on where you are, is that there are a couple of industry segments that are getting further into the digitalized era, starting to rely heavily on cloud and cloud concepts and taking those out to the edge, whatever the edge means to them. So we looked at large telecom deployments, the cell towers on the top of the mountain, and at the same time you can look with the same mindset at a factory floor, at the large machinery and also the small industrial PCs sometimes sitting next to the large machines. They also have highly regulated parts of their operation; human safety is even on the line when it comes to factory floors. So all the real-time capabilities have to be there to avoid any accidents. And at the same time, obviously, they are trying to make production as efficient and as automated as possible. So again, they are facing similar challenges to what the telco deployment does, however the whole look of it is completely different.

Another area that I like to bring up is agriculture and aquaculture. I'm co-leading a working group called the OpenInfra Edge Computing Group, and we have two white papers. This is one of the use cases that we highlighted in our second white paper, and it is really interesting how you automate a shrimp farm. The example actually comes from China: how they are using AI and machine learning, and how these ponds are turning into digital infrastructure. And again, there are similar challenges to some of the previous ones, in the sense that you have to be able to constantly monitor the environment and react if something happens, whether that's environmental circumstances or somebody like an intruder breaking into the place and trying to do damage. So for those, you really have to be able to run the workloads efficiently and react in real time to make sure that the animals are safe at all times, as well as the humans who are still working there, while utilizing the same and similar concepts like hardware acceleration, and trying to utilize the resources that are available as fully as you can to be able to run the machine learning and the other new algorithms that were not known to this industry segment just a few years ago.

So, to summarize a little bit what I just rambled about in the past five minutes or so: one thing that I think will be easy to agree on is that all these systems grow large in their respective areas, and it always ends up in complexity. These are also usually large and organically growing systems. So again, we always try to eliminate complexity, and in my personal opinion, we really are starting to get to the point where we all just have to accept that complexity is something that will always stay with us. We need to figure out how to handle it, how to evolve in areas like orchestration and automation, and how to just live together with complexity, because we will not be able to run these large-scale systems in any simple way. And this is where, again, automation will be a big part of the rest of the presentation.
Today I will also be focusing on the infrastructure bits and pieces: what it looks like and how you can start dealing with the kinds of challenges that I mentioned. Just a simple thing: there's a small bug in one of the software components that you just deployed on, I don't know, tens of thousands of edge sites. So how do you fix that? You will probably not send out a human to every single site to install a patch from a USB drive. Or if you do, that will be a really long process that will cost a lot of money. So how do we deal with all these things?

The other thing that I always like to remind people of is that you always have architectural choices. One part that I really like about being in an open source environment is that I get to talk to people about what their use case is and what scenario works for them the best. When it comes to the edge computing details, there's always a big conundrum: do I want one central place where I run the whole massive infrastructure from, where all the control services are, and on the edge I just worry about my workloads and I don't care about anything else? It's much easier to orchestrate and manage. But what happens when I lose connectivity between the central site and the edge, for instance? Will the edge just stop operating? Will the workloads still be running? Those are the pain points of that architecture option. Or the more popular one, on the right side, is the one where you have control services running all over the place, just to really make sure that the edge has autonomy. And again, the edges are always different. So in some use cases the centralized way works well, and in other use cases you want a more decentralized, distributed model. So again, never say just edge, because it always depends on the context.
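To make that trade-off a bit more concrete, here is a purely conceptual sketch; this is not StarlingX code, and the function name and the return strings are invented for illustration. The whole difference between the two options comes down to what an edge site can still do on its own when the link to the central site disappears.

```python
def edge_behavior_on_link_loss(has_local_control_plane: bool) -> str:
    """What an edge site can still do when the link to the central cloud is lost.

    Purely illustrative: the two branches correspond to the centralized model
    (control services only in the central cloud) and the distributed model
    (control services replicated out to the edge sites).
    """
    if has_local_control_plane:
        # Distributed model: the site keeps its autonomy; workloads keep running
        # and local management (users, images, new instances) still works.
        return "workloads keep running; local management still available"
    # Centralized model: existing workloads may survive, but nothing can be
    # reconfigured or restarted until connectivity to the central site returns.
    return "workloads may keep running; no management operations until reconnect"


print("centralized model:", edge_behavior_on_link_loss(False))
print("distributed model:", edge_behavior_on_link_loss(True))
```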
So, the solution part: what does infrastructure software look like that might be able to handle at least a good chunk of the challenges that I just touched on? In the rest of the presentation I crammed in a lot of information about the StarlingX project. I will not go into details about every single one of those features or technical details. There are pointers on the slides for documentation and also for where to reach the community, because it is an open source project with a really lively open source community around it. So you will be able to find the experts who are working on different parts of the project if this is something that is interesting to you.

So what is StarlingX? Quick question to wake you all up, hopefully: who has heard about StarlingX before? Who knows it? Awesome. A few of you, but not that many. Then I will focus a bit more on the introductory part of the slides. In a nutshell, StarlingX is an integrated open source cloud platform that is fine-tuned and prepared for fulfilling the requirements and challenges of edge and IoT use cases. What that means in practice you can see on the diagram on the slide: you will probably find a lot of components here that you know. The Linux operating system and kernel I probably don't have to introduce to anyone. You can see Kubernetes, you can see OpenStack, and you can see a lot of components in the orange boxes, like Ceph, Docker, Calico and KVM, all those kinds of open source components that you're probably already really familiar with. So what StarlingX and the community do is integrate well-known open source components together and add the missing functionality to the mix.

Going back to what I was talking about regarding complexity and automation and how you manage infrastructure software on a massive scale: the components with the purple icons, as much as the color is visible, are the services that are designed and developed by the community specifically to address the needs and nitty-gritty details of the edge use cases. So one of the main focuses of the project is making it as easy as possible to deploy, manage and orchestrate the infrastructure services that are integrated together. The other angle of the design and development work is to focus on the requirements of edge, which are things like security, for instance, focusing on some of the real-time aspects, and figuring out, again, how to structure the services within the distributed infrastructure. I will be reflecting back on those two architectural models in a little bit. The project is already running in production in a couple of large telecom companies; I threw a few examples on the slide, like T-Systems, Verizon, Vodafone and KDDI. So if any of you were wondering whether this one can run at large scale, then you probably already got your answer, because those telecom companies never joke about scale.

So what does the platform look like in practice? This is a different view of that architecture diagram that you can see. There is a central cloud, which reflects back to my earlier point that the edge is usually on the edge of something, because otherwise we would not call it edge. What is important on this slide is that we all know that edge sites, when it comes to size and capabilities, can vary a lot; they can be day and night. Sometimes you only have one single server. Sometimes you have multiple servers and want to run something in a high-availability scenario. Sometimes you have a large or medium edge site, which is often called a regional edge, or could be called a central office if you are in the telco industry segment. So when it comes to the StarlingX platform, what is important is that it can run in a hyperconverged mode on a small edge site on one server: you get all the storage, networking and compute functions on one server. It can run on multiple servers, for instance to implement some sort of high-availability configuration. And it can obviously also populate the central cloud and the large or medium regional edge sites. So you don't have to have multiple projects and software components to be able to deploy an edge infrastructure, where the edge is always heterogeneous, mostly at least in configuration options like how many servers you have available, but also in terms of what is available on that particular server.

The other interesting part of the project is that, when it comes to these sites on the previous diagram, I showed that StarlingX has both OpenStack and Kubernetes available in it. So when it comes to deploying the edge, you might run OpenStack services on a small edge site, or you might only run containerized workloads with Kubernetes on that single-server small edge site. And again, this is one single project that can provide you with those deployment options within one deployment. So the choice really is yours, and you can fine-tune the platform to your use case.
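To make the "one platform, many footprints" point a bit more concrete, here is a rough sketch of how you might describe a mixed fleet. The profile names loosely follow StarlingX's documented deployment configurations (All-in-one Simplex, All-in-one Duplex, Standard), but the site names, server counts and the data structure itself are made up for illustration.

```python
from dataclasses import dataclass


@dataclass
class SiteProfile:
    name: str        # hypothetical site name
    servers: int     # how much hardware is available at that location
    deployment: str  # StarlingX-style deployment configuration
    workloads: str   # which workload APIs the site exposes


# One platform, several footprints: from a single hyperconverged server up to
# a regional or central cloud. The exact mix below is hypothetical.
fleet = [
    SiteProfile("far-edge-001", 1, "All-in-one Simplex",
                "Kubernetes containers only"),
    SiteProfile("far-edge-002", 2, "All-in-one Duplex (HA pair)",
                "Kubernetes containers only"),
    SiteProfile("regional-edge-01", 6, "Standard (controllers + workers)",
                "Kubernetes containers + OpenStack VMs"),
    SiteProfile("central-cloud", 20, "Standard, acting as system controller",
                "Kubernetes containers + OpenStack VMs"),
]

for site in fleet:
    print(f"{site.name}: {site.servers} server(s), {site.deployment}, runs {site.workloads}")
```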
What this means in practice I will not go through in detail on this slide. The point here is that when StarlingX integrates those well-known components and well-known APIs, you get the advantage that, when you're using the project, you get the interfaces that you probably already know. And since these are all open source components, even if you don't know them yet, you have access to the code and to the documentation. So it's not just seeing the interface of a black-box environment.

So StarlingX does give you the traditional deployment options and components of Kubernetes. You can see that there are multiple options in terms of networking interfaces, hardware acceleration and even container runtimes. I would shamelessly just point to Kata here: Kata Containers is a really cool container runtime project, again open source, and if you don't know that one, I suggest you look into it, but I will not spend time on it here today.

The OpenStack deployment: what is interesting in StarlingX about deploying OpenStack is that the project deploys the infrastructure services in containers for you. That does not mean that the virtual machine workloads that OpenStack runs have to be in containers; it really is just the OpenStack infrastructure services that are running in containers. And why is that good for you? You probably want this type of setup because running the services in containers gives you a lot of flexibility and manageability. It's much easier to configure the system, as well as to, again, roll out updates and manage the whole environment. So really the secret sauce is that one of the popular ways of deploying OpenStack is deploying the services in containers, and that's what you also get with StarlingX. In the current, ongoing 8.0 release cycle, the community is working on integrating Flux CD from the CNCF ecosystem, so that will be the next way of deploying the containers and putting together the configuration of the system.

And we arrive at the distributed cloud architecture functionality, which is one of the flagship features of the project. This really goes back to the diagram that I showed that had the multiple nodes in the deployment. In StarlingX, the community chose to implement a distributed cloud architecture, so they chose the distributed option. They did that because the use cases that they are most often preparing the project for really are highly dependent on providing autonomy at the edge. So if you lose the connection, like the edge site loses its connection to the regional or the central cloud, you still want all functionality, preferably, but at least most of it, still running. So it's not just having the workloads still operating, but also being able to do some user management at the site and spin up a new image or instance of one of the services. You want to have some control over the system even though you don't have access to the central cloud anymore. So the community chose that architecture option, and this is how the services and those orchestration functions are also designed within the project.

What you also get from this is that there's a system controller that runs centrally, and you get a single-pane-of-glass view of the system. There are some more details about that on this slide, but what I really want to focus on here is that you have access to the whole system from the central site, but at the same time there is some built-in failover, in the sense that you are not 100% reliant on that connectivity between the central cloud and the edge at all times. And with that, you obviously get the ability to still centrally manage the system, like the monitoring and authentication, and you get the centralized dashboard. So it is a really nice functionality, which really is a conceptual choice first, and then come the details of how it is actually implemented.
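Just to illustrate the single-pane-of-glass idea — this is not the real StarlingX API, and the subcloud names, fields and states below are invented — the central system controller essentially rolls the state of every subcloud up into one view an operator can act on.

```python
# Invented example data: availability and sync state per subcloud, as the
# central site might see it. Field names and values are illustrative only.
subclouds = {
    "edge-site-0001": {"availability": "online", "sync": "in-sync"},
    "edge-site-0002": {"availability": "online", "sync": "out-of-sync"},
    "edge-site-0003": {"availability": "offline", "sync": "unknown"},
}


def fleet_summary(subclouds: dict) -> dict:
    """Roll per-subcloud state up into the numbers an operator looks at first."""
    summary = {"total": len(subclouds), "offline": 0, "out_of_sync": 0}
    for state in subclouds.values():
        if state["availability"] != "online":
            summary["offline"] += 1
        elif state["sync"] != "in-sync":
            summary["out_of_sync"] += 1
    return summary


print(fleet_summary(subclouds))  # {'total': 3, 'offline': 1, 'out_of_sync': 1}
```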
I did start my presentation with telecom environments, so what I wanted to do is get back to that a little bit and see what kind of feedback and information the community has received since the project got so popular in the telecom space. One thing to highlight is that with the new 5G rollouts and 5G use cases, and how 5G sort of reconfigured what a telecom infrastructure looks like and how disaggregated it gets, one of the main pieces of feedback is that the 5G use cases are highly reliant on containers. So when it comes to the part where I told you that you can have edge sites with only containerized workloads, that first bullet point really points back to that. And this is where, again, the small footprint comes into the picture: with that configuration option, you can run services on one single server, make sure that your radio unit is connected to those functions, and provide full functionality on a really, really small footprint.

The other item that I wanted to mention is the size of telco deployments and the scale, because we are not talking about tens or hundreds of sites and servers; we are talking about the range of 50,000. And when it comes to that kind of range, obviously you have to be able to come up with configuration options and deployment options that can support that kind of scale, while still giving you a view of what the sites look like, what happens with the alarms and events, and being able to manage the infrastructure in a performant and efficient way. So this is one thing that the project supports, in terms of spinning up the configuration in a way that is in line with the scale of deployments like 5G and O-RAN deployments.

Looking into, again, automation and orchestration, the project supports, and is continuously looking into evolving, areas like zero-touch provisioning. That one is obviously really important at the very first point, when you have to install and deploy those 50,000 sites. It doesn't just happen overnight; the rollout has to be planned out. And to give you an idea of what it means: if you want to deploy and roll out those 50,000 sites, that means installing 100 sub-clouds per week. I mean, again, you can send out the person with the USB stick, but you probably don't want to. And even if you do, that's not just one person; it's just a massive scale. So automation really is key. It's not my favorite word, but it's definitely a word that I will keep repeating more and more, I believe, in the upcoming years. And if you look at the project's website, you will, for instance, see a short demo video about patching at large scale: kind of a push of a button, and the system rolls out a patch to the edge sites, because you might have had a small bug that you had to fix. So it really makes a huge difference at large scale whether you're able to manage something efficiently remotely or not.
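Here is a minimal sketch of the idea behind that push-button patching, assuming nothing about how StarlingX actually implements it: at this scale a rollout happens in bounded batches, with failures collected for retry, rather than one site at a time or all 50,000 at once. The function names, the batch size and the fleet below are hypothetical.

```python
from typing import Callable, Iterable


def rollout_patch(sites: Iterable[str],
                  apply_patch: Callable[[str], bool],
                  batch_size: int = 100) -> dict:
    """Apply a patch to a large fleet in bounded batches.

    `apply_patch` stands in for whatever pushes and verifies the patch on one
    site; in a real platform that would be the update/orchestration service.
    Failures are collected for a later retry instead of aborting the rollout.
    """
    results = {"succeeded": [], "failed": []}
    batch = []
    for site in sites:
        batch.append(site)
        if len(batch) == batch_size:
            _run_batch(batch, apply_patch, results)
            batch = []
    if batch:
        _run_batch(batch, apply_patch, results)
    return results


def _run_batch(batch, apply_patch, results):
    # In practice a batch would run in parallel, with health checks between
    # batches; here it is sequential to keep the sketch short.
    for site in batch:
        (results["succeeded"] if apply_patch(site) else results["failed"]).append(site)


# Hypothetical usage against a made-up fleet of 50,000 sites.
sites = [f"edge-{i:05d}" for i in range(50_000)]
report = rollout_patch(sites, apply_patch=lambda site: True)
print(len(report["succeeded"]), "sites patched,", len(report["failed"]), "to retry")
```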
And then there are the small things, like certificate management. If you look at, for instance, Kubernetes deployments, this can be something that gives you a lot of headaches if your system and architecture are not prepared for it. So when it comes to, again, preparing for these use cases and how to handle the infrastructure, even these small things are at the top of the community's mind, to make sure that your system will not break down because one single certificate hidden somewhere has expired, because that one is really annoying and sometimes really hard to find.

So, let me see how much time I have left. Just a few words about some of the new things that are coming out in the project. We are currently on the 7.0 release. It's so fresh and new, still hot: the community finished the release process at the end of last week, so we just announced the release yesterday. So I almost feel like I'm on the keynote stage; you are among the first people to know firsthand that the 7.0 release is out. If you're interested in looking into how the project works, I added all the links for where you can find the ISO to deploy the software, the release notes and the project documentation. There is a lot of information about the project on the web, so I really do encourage you to go download it and play with it. And most importantly, I will repeat this at least three more times: this is an open source project in an open source community, so if you like something, come and tell us. If you don't like something, then really make sure you come and tell us, so we can fix it or work with you to figure out why you don't like it or why it's not working. The community really wants to know your feedback, and if you want to come and contribute, even better.

So, key features. There is an ongoing item that I wanted to just throw in here because it's an important piece of information. The community, well, the project has a Linux operating system integrated into the platform, and it is currently still mainly CentOS, but the community is moving over to Debian. There is a Debian version with partial functionality available to try out already, and they are currently working on finalizing the migration over to the Debian operating system. At the same time, and I did not talk a lot about this, they are integrating the 5.10 kernel, I believe currently, from the Yocto Project to provide you with that real-time kernel, so when it comes to that part of the project there is some enhancement there as well. They are integrating the Horizon project from OpenStack, which is what the dashboard is based on, and they are doing some enhancements in that area as well to give you even more options to fine-tune and manage your infrastructure. Even the small things like a firmware upgrade, which is not that small when you have to do it at large scale and in a heterogeneous environment, because when it comes to the edge, I highly doubt that there is any edge environment out there that has every single site equipped with the exact same hardware. Or if there is, that might be one of the miracles of the 21st century, but it's definitely not where we are going in the future. So being able to handle the different hardware devices will be, again, more and more important, and those small things add a lot to your complexity problems at large scale. And when it comes to scalability, the community is continuously looking into increasing the number of sub-clouds that the platform can efficiently handle.
They are also always looking into making the simple, single operations more efficient and being able to run more and more of them in parallel, which probably sounds trivial, but again, when you get to that 50,000-sites-and-more kind of scale, nothing is straightforward and trivial anymore.

Security and stability: I don't think that I necessarily have to spend a lot of time on this one, in the sense that if you're looking into any edge platform and it doesn't have any focus on security, then you should probably look into another one. Because when it comes to the edge and having all those little sites deployed everywhere, most often in areas where you don't really have a lot of supervision over them, having enhanced security is really, really crucial. The project and the community are currently looking into things like security audits, and since I talked a lot about the telco industry, you can see SNMP support there; you can blame all the telco operators for that one, but I think that it is still a really, really important piece. You can also see that they are doing simple updates, like moving over to the Pod Security Admission controller in Kubernetes, which means that they really are putting a high focus on staying up to date with the developments in the projects that they are integrating but not developing and designing themselves.

On this slide I just want to highlight PTP, and I did not do a good job with this one, so: PTP is the Precision Time Protocol, and in case you don't know it, it really is all about keeping the clocks in your environment in sync. It is crucial for 5G, for manufacturing and industrial use cases, and honestly, you name it: wherever you need any type of real-time functionality, it really does come in very, very handy. If you have a use case that has some specific requirements like this one, then you can find functionality in StarlingX to cover those.

Just a quick outlook at some of the items on the roadmap. I mentioned the Debian support already, so I will not go into that one. Kubernetes enhancements, the custom configuration at runtime: that requires a little bit of explanation, so I will spend roughly 30 seconds on that one. What it means boils down to the complexity part that I talked about: the community is trying to figure out how to make it a little bit harder for people to shoot themselves in the foot. So I'm not saying that they are limiting a lot of things, but how you can configure some of the components has some limitations, which applies to, for instance, Kubernetes. In the 8.0 release, you will be able to perform all kinds of configuration changes at runtime, as opposed to only at deployment time of the platform, which is on one hand really nice because you have a lot more flexibility. On the other hand, the options are unlimited when it comes to infrastructure projects like Kubernetes or OpenStack, so any of you who have ever tried to operate one of these will know that once you get a free hand to do whatever you want, that's when it gets really dangerous. But you're welcome, because, again, more flexibility to you. Hardware acceleration: a list of a few new devices that the community is looking into integrating. There are a couple in there already, so GPUs, FPGAs and those kinds of things you can already run with the platform, and when it comes to the edge, we just can't really stop integrating and supporting more and more of those.
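On the PTP point from a moment ago, here is a tiny conceptual sketch of what "keeping clocks in sync" means operationally. Real PTP synchronization is handled by daemons such as ptp4l and phc2sys on the host; this only illustrates the kind of offset check a platform's monitoring might perform, and the threshold is an arbitrary placeholder, not a 5G or O-RAN requirement.

```python
# Arbitrary placeholder threshold (1 microsecond, expressed in nanoseconds).
MAX_OFFSET_NS = 1_000


def clock_sync_status(offset_from_master_ns: int) -> str:
    """Classify a reported clock offset from the PTP time source."""
    if abs(offset_from_master_ns) <= MAX_OFFSET_NS:
        return "in sync"
    # Out of tolerance: the platform would raise an alarm and flag the site.
    return "out of sync - raise an alarm"


print(clock_sync_status(120))      # in sync
print(clock_sync_status(250_000))  # out of sync - raise an alarm
```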
And, again, 5G, security, distributed cloud, because I'm running out of time. You will be able to find information about these in the community's resources, so I just really want to highlight that all these areas are in focus, and as for security, you will always find some new security features and enhancements in every single release of the platform.

Getting involved: again, it's an open source community. They have a mailing list, and they also have weekly meetings. You can even find me, at least, on their Twitter handle if you happen to use it. So for any of the communication channels or ways to get in touch with the community, it is up to you which one you choose, and if you happen to have any difficulties or questions, you can always find me at my email address, which is also in the slides. And I wanted to give a little bit of a highlight on where you might be able to find people from the community. One thing that we at the OpenInfra Foundation run is a live show on Thursdays. It's called OpenInfra Live. That is one of the places where you can find all kinds of updates and information about the OpenInfra community and happenings, and that includes updates from the StarlingX project. I'm really hoping to have an episode up about the 7.0 release and some of the roadmap items sometime this year, so stay tuned for that one. And if you want to meet contributors of the project, still online and virtually, we do have a Project Teams Gathering event. Registration is already live; you can find all the information that you need at the link on this slide. It is a contributor-focused event, so what we do there is the communities get together and talk about all the nitty-gritty details of the project that they are working on: what are the good things, what are the bad things, what are the big things that they need to work on and give high priority to. So if you have any feedback, or you would like to meet the people, or you have any questions, that is a great place to meet many, many of the contributors. This year the event is online in October, and I'm hoping that next year we will be able to get together in person again. And roughly 59 seconds for questions, if we have any. Yep? Oh wait, I think you need the microphone first.

So one thing I was wondering is if maybe the scope of the StarlingX project isn't too wide. Like, I always thought of OpenStack as something basically unreachable, whereas, compared to that, Kubernetes, even though it's still a huge project, seems a little tighter in scope. And StarlingX is basically Kubernetes and OpenStack with operators on top, with a huge amount of software basically trying to fit together. Like, in different companies, for different use cases, it seems almost like by the time you reach the end of the infrastructure, it will already have changed to something else. So that's kind of my concern. I don't know if it's even a question, really. It's kind of a trend I feel, especially coming from the OpenStack project; this is how I felt with it.

What I would say to that is, I think you're 100% right. And it also goes back to the complexity aspect that I mentioned a couple of times. When it comes to StarlingX, if you look at it, it is still a component, a platform that is on the infrastructure layer. So the community is not looking into the application space, for instance. They are also working with other communities, like ONAP in the telco space, when it comes to orchestration of the telco environment. So the platform itself is not trying to solve everything. What it tries to help you with, though, is how you integrate those components together.
Because that is one of the big challenges for companies who are just taking Kubernetes and OpenStack themselves and trying to put them together. How do I do that? I have one complex piece of software here, another complex piece of software there, I have an edge use case, and then what? So what StarlingX is trying to do is integrate those components together in a way that makes sense for edge deployments. And the purple components that I showed you, those are practically the bits and pieces that are not necessarily fully covered in the open source software ecosystem right now. So they are adding those components to prepare the platform for an edge use case. So personally, I don't think that it explodes their scope; they are just looking at and approaching these infrastructure components and challenges in a different and more integration-targeted way. Does that make sense? I can see that could help with the issue. Definitely. Thanks.

We are, I think, two minutes over. I'm happy to chat with any of you, either of you, all of you, once I have the microphone off of me. And thank you all for coming.