Good afternoon, and welcome to this session on Kubernetes in restricted enterprise environments. My name is Oleksii Chunikhin, and I am with the company Kublr. We have been working with Kubernetes since its first release in 2015, building a platform that helps enterprises adopt this new technology. To set the stage a little for this session, I want to share our view of what is happening in this space and how these new cloud-native and Kubernetes technologies look in practice. When we talk about cloud-native, on the application side we usually mean architectures in the form of microservices and APIs, with a clear separation of stateless and stateful components; that is the software architecture part. On the infrastructure side, the important characteristics are self-service infrastructure, isolation from operating system and service dependencies, agile DevOps processes, highly automated processes, and declarative resource management. We also see the cloud-native stack being used more and more widely, in digital transformation initiatives, in data science, in IoT, and across all kinds of infrastructures. And the reason cloud-native technologies create such high expectations is that they target developer and operator productivity, not just infrastructure. They have a long set of predecessors, starting from SRE, DevOps, 12-factor apps; even virtualization may be considered a step in that evolution, where we tried to isolate ourselves from the complexities of infrastructure management and make people who develop and deploy software more and more productive. And ultimately they empower IT teams to deliver quickly, reliably, and predictably, and seemingly enterprises, and larger enterprises especially, should benefit most from that.
But in reality, the adoption of cloud-native technologies in enterprises looks different, if we don't count the whales like Google and Microsoft whose focus is IT itself. Why is that? Because enterprises have a number of characteristics that, at first sight, may even contradict what cloud-native technologies assume. That is really the topic of this presentation. So what are the restrictions? There is a range of limiting factors: various enterprise requirements, organizational constraints, technology diversity, infrastructure constraints, and governance rules that complicate this goal. If we look closer at these requirements, the first thing we see in enterprise cloud-native adoption is highly complex environments, what is often called hybrid: you have multiple clouds, you have your own data centers, and at the same time you have teams that need to work with cloud services or need elastic compute resources, for example. One of the strongest enterprise requirements is, of course, centralization and unification of management and governance of infrastructure, software, and packages. Further requirements concern integration with legacy, or not necessarily legacy, but already existing components that cannot simply be discarded and must be integrated with cloud-native components. Another important area is security: specific requirements and policies for packaging, deployment, images, and certificates, as well as various regulations and security constraints. And finally there is the organizational structure that often already exists and is also difficult to change overnight, when you have different teams and departments that are responsible for infrastructure management, for software management, and for software development, security, and legal.
So all of that makes it more difficult to use and implement cloud-native architectures in larger enterprises. And add to that complexity that even when an infrastructure operations team is ready to start implementing Kubernetes or cloud-native in their enterprise, they see this, and some of you have probably seen this diagram: it's the CNCF cloud-native landscape. Deciding which components of this LEGO set should eventually comprise your architecture, which, by the way, you don't yet even understand how it should look and how it should be set up for future development, creates a pretty high barrier to entry. So considering that, we thought about which ways of thinking about Kubernetes in this environment are more or less productive. Is it just container orchestration, which you use only for those teams who need container management? Is it, as some people say, just a step in the evolution from mainframes 30 years ago to serverless maybe 10 years from now? Is it just a microservices platform? What we found to be a productive way of thinking about Kubernetes when you're planning a cloud-native architecture rollout is to think about it as an infrastructure and cloud abstraction and platform. And I'll talk about that a little bit. Considering all those requirements we barely scratched over the last several slides, you may eventually think of a Kubernetes container orchestration architecture for an enterprise looking like this. Essentially you have a container runtime and a container orchestration layer represented here. It may be just plain CRI-O and open-source Kubernetes, or it may be vendor-managed, cloud-provided Kubernetes. You think about that as a layer for container management and orchestration on top of the various infrastructures that you may have under your control.
So if it's a data center and Amazon, you can use the same Kubernetes and the same container runtime on top of both and reduce problems when you're migrating or failing over applications between them. In addition to this foundational layer, you cannot live without certain operational harness and security and governance harness components that enable delivering that platform to different teams inside your organization in a controllable, governed, secure, safe manner. At a minimum that includes infrastructure and operations automation components. You will most probably need an API to enable a certain level of self-service for your teams, because it's usually not enough to just deploy a couple of clusters for this team and that team. In most cases the development flow includes experimentation, trying this or that framework on top of Kubernetes and setting up Kubernetes clusters, and that is difficult when the process is not completely automated and the development team cannot do it for themselves. Logging and monitoring, of course, are an important part of that operational harness, and they should cover not only the infrastructure and container orchestration level, but also the level of auxiliary services, the platform itself and, of course, the application layer, including observability for higher-level application frameworks. If you are using microservices frameworks, just collecting logs and metrics is usually not enough; you need traceability. That's where Jaeger, Zipkin and frameworks like that come into play, letting you trace essentially trees of calls between your microservices. And there is a number of components in the security and governance area that cover various aspects of security: role-based access control, transport-level security and certificate management, image and binary scanning, et cetera, et cetera.
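As a minimal sketch of the role-based access control aspect just mentioned, a namespaced read-only role and its binding might look like this (the namespace, role, and group names here are hypothetical, chosen only for illustration):

```yaml
# Hypothetical example: read-only access to pods in a "dev" namespace,
# granted to a group asserted by the enterprise identity provider.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: dev-team-pod-reader
subjects:
- kind: Group
  name: dev-team            # group name as asserted by the identity provider
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Applying a small set of such standard roles uniformly across clusters is one way the "uniform policies" idea discussed in this talk can be realized.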
But this is not all, because if you think about Kubernetes, there is a number of auxiliary services that you cannot live without when you are deploying it, and that includes storage, usually cloud-native storage, because without it you can only meaningfully manage stateless services. In most cases when you are talking about services and microservices you also need to ensure some way of accessing those services from the outside world; this means an ingress and API management layer. Then networking, which on one hand is intra-cluster networking, which Kubernetes providers usually cover by themselves, but there is also inter-cluster and inter-system networking: the ability to isolate systems from, or open them to, each other. That's where frameworks like Tigera's Calico, for example, play a big role. That's what happens on the side of, let's call it, managed services or Kubernetes services, because those are not yet your business-specific applications and services; they are auxiliary components needed for those to exist. Slightly higher-level components that enable you to deliver and build those services include service mesh frameworks, serverless frameworks that can also run on top of Kubernetes, various repositories and registries, CI/CD pipeline components, and application lifecycle management. Only after that are you well set for actually releasing your probably first production application and letting it out into the wild. So where is that infrastructure slash cloud abstraction and platform? It's here. Think of these as applications running on top of Kubernetes, and you immediately see that these components give you sort of your pocket cloud: an abstraction that unifies various infrastructure providers and delivers compute, storage, and other infrastructure-level resources in a standard, uniform way under a common operational and governance platform.
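To make the ingress layer mentioned above a bit more concrete, a minimal Kubernetes Ingress resource might look like the following sketch (the host name, backend service, and port are assumptions for illustration, and an ingress controller is assumed to be installed in the cluster):

```yaml
# Hypothetical example: exposing an internal "orders" service to the
# outside world through the cluster's ingress controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orders-api
  namespace: prod
spec:
  rules:
  - host: orders.example.com        # assumed external host name
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: orders            # assumed in-cluster backend service
            port:
              number: 8080
```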
And the components that you run on top of that infrastructure abstraction platform convert it into a real cloud with real managed services. For example, if you need storage that works across your data center and cloud, you can use cloud-native frameworks like Portworx, or Ceph with Rook. If you think about fully open-source alternatives, your CI/CD pipeline may be based on Jenkins and Nexus and run, again, on top of Kubernetes in your data center, or you can migrate it into Azure while your real applications run in AWS. Or some team may prefer Jenkins X or CircleCI. But if you think about that as a separable component, which may be a managed service or a hosted service, but hosted on top of this infrastructure abstraction platform, you make your life easier on one hand and you make your architecture more future-ready, in a sense. With that in mind, I will now switch to an overview of some solutions that we came to over the recent couple of years, when we essentially built that architecture based on open-source components, with the idea that this universal architecture will cover the needs of a general enterprise. And I'll start with the Kubernetes management component. One of the requirements that I mentioned when talking about enterprise requirements and challenges is centralization and unification of the management of your assets. As soon as you start providing Kubernetes as a service inside your organization, and again this follows from the fact that usually your infrastructure management team gets tasked with this in an average enterprise, you need some level of visibility: a common single pane of glass where you see all your clusters and can manage them, and that also means an API, a certain unification of operations, and infrastructure management, which may include cost controls, et cetera, et cetera.
That may be covered to some degree by frameworks being developed under the Kubernetes umbrella, such as Cluster API, for example, where there has been a lot of progress: unification of cloud provider management, the ability to uniformly manage various infrastructure elements with different infrastructure providers using the same API, and the same central registry of your clusters and infrastructure. That component usually needs to include log collection and monitoring, and not just disparate Prometheus and Elasticsearch instances deployed into each and every cluster, because that's rarely enough for even an average-scale Kubernetes platform deployment. So you need some level of centralization of log collection, metrics collection, and analytics. Then there are identity and access management integration components, because, again, Kubernetes is rarely the only component that requires some level of centralization of security and access management, and you need to integrate it with existing enterprise identity management systems. And components like binary repositories, image repositories, and image management components, which often include security scanning for images and an approval workflow. What we found working well for components like that is an architecture with an identity broker, where you have a registry of your clusters and the benefit of setting them up uniformly in the same way. Kubernetes by itself supports integration with OIDC identity providers, but it is beneficial to have an identity broker which is essentially dedicated to Kubernetes RBAC management. You can use it to manage access control in a uniform way, applying standard policies to the different clusters under your management, and then integrate it with your enterprise identity management for authentication, for group and role management, and so on. What we found works well in that role is Keycloak, for example.
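As a sketch of how a cluster can be pointed at such an identity broker, the Kubernetes API server exposes standard OIDC flags that can be set, for instance, through a kubeadm configuration. The issuer URL, client ID, and claim names below are assumptions; the exact values depend on how the broker realm is configured:

```yaml
# Hypothetical kubeadm snippet: wiring the API server to an OIDC identity
# broker so that users authenticate through it and groups map onto RBAC.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    oidc-issuer-url: "https://keycloak.example.com/realms/k8s"  # assumed broker URL
    oidc-client-id: "kubernetes"
    oidc-username-claim: "preferred_username"
    oidc-groups-claim: "groups"
```

With the groups claim mapped this way, RBAC bindings can refer to the same group names across the whole cluster fleet.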
Keycloak is WildFly-based identity management software; it supports OIDC, SAML, LDAP, and Active Directory, so it can be integrated with pretty much any existing enterprise identity management solution, and on the other hand it can provide OIDC endpoints for your whole cluster fleet in a standard, uniform way. Binary repositories and image management: there is also a number of solutions out there, managed services and SaaS services, Aqua Security is the first that comes to mind, but there are also open-source and partially open-source components like, well, Artifactory of course, and Sonatype Nexus; they all now have componentry and APIs to integrate image management and a security workflow into your build and deployment pipeline. Another important part of the approach is being ready to deploy clusters, let's call it, from scratch, because while it's often enough to rely on managed Kubernetes like EKS from AWS or AKS from Azure, in real enterprise life having a data center and not getting rid of it is not unheard of, and in fact happens more often than not, for various reasons: policy, security, isolation. So for a framework, being able to deploy clusters in environments like that, on bare metal, on virtual machines, even in isolated environments with no or limited access to the Internet, is a must, especially if you're building a more or less future-proof solution. Here I usually talk about certain characteristics of the clusters that you deploy yourself. We don't have too much time today, so I won't go into too much detail, but the idea is that clusters deployed by that control center still need to stay independent of the control center in terms of their recovery.
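As one small illustration of the isolated-environment point above, a kubeadm-based cluster can be pointed at an internal image registry so that control-plane images never have to be pulled from the Internet. This is a generic kubeadm sketch, not any particular vendor's mechanism, and the registry address, version, and subnet are hypothetical:

```yaml
# Hypothetical kubeadm snippet for an air-gapped deployment: all system
# images are pulled from an internal, pre-populated registry mirror.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.0
imageRepository: registry.internal.example.com/k8s   # assumed internal mirror
networking:
  podSubnet: 10.244.0.0/16
```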
So if the control center goes down, a cluster needs to be able to exist by itself and recover by itself. What we found to be a good solution for that is distributed operations logic, usually implemented by an operations agent that is tasked with preparing instances for Kubernetes and making sure that they connect into a single Kubernetes cluster in a secure way. Usually, for that to happen, the agents require a certain level of interconnection orchestration: they can either talk directly or, more often, use an architecture with some intermediate orchestration store, essentially a secure store which can be used to share secret keys, for example, and some operational information. Running most of the Kubernetes components as containers also usually helps, and it has become more or less standard in different Kubernetes products over the last years, because it clearly helps with portability, supporting multiple different operating systems, et cetera. Everything else looks pretty much self-explanatory, except for two components that I mentioned in that general architecture slide with the control center, related to monitoring and log collection. I said that we need to be able to centralize those, and in most cases it's easiest and most reliable when you just use a managed service, something like Splunk, or if you rely on AWS, CloudWatch Logs, et cetera. But if you are in a heterogeneous environment, or for some reason cannot rely on SaaS services or share your information with them, you may have to maintain those components yourself, and that's where you may go into the uncharted territory of building and maintaining a centralized log collection and monitoring solution on your own. So I just wanted to highlight the differences from standard monitoring and log collection as you usually see them in Kubernetes clusters.
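To sketch what centralized metrics collection through the Kubernetes API can look like in practice, here is a hypothetical federation job on a central Prometheus that scrapes a child cluster's in-cluster Prometheus through that cluster's API server service proxy. The namespace, service name, API server endpoint, and credential file paths are all assumptions for illustration:

```yaml
# Hypothetical federation job on the central Prometheus: it reaches the
# /federate endpoint of a child cluster's Prometheus by tunneling through
# the child cluster's Kubernetes API server proxy.
scrape_configs:
- job_name: federate-cluster-a
  honor_labels: true
  scheme: https
  # API server proxy path to the in-cluster Prometheus service
  metrics_path: /api/v1/namespaces/monitoring/services/prometheus:9090/proxy/federate
  params:
    'match[]': ['{job!=""}']                           # federate all job-labeled series
  bearer_token_file: /etc/prometheus/cluster-a.token   # assumed service account token
  tls_config:
    ca_file: /etc/prometheus/cluster-a-ca.crt          # assumed child cluster CA
  static_configs:
  - targets: ['api.cluster-a.example.com:6443']        # assumed child API server
```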
Normally when we are talking about monitoring, we think about Prometheus running in the same cluster that it monitors, talking to the Kubernetes API, collecting metrics from various metrics endpoints, and visualizing those with Grafana running in the same Kubernetes cluster. When we are talking about centralized monitoring of every cluster, this setup, first of all, is wasteful, because Prometheus is a hungry guy; it can use like 16 GB of RAM if the cluster is of any meaningful size, a lot of disk space, et cetera. And secondly, you don't get your cross-cluster, cross-environment analytics. So one of the approaches we ended up using in certain situations where it's required is centralizing metrics collection using the Kubernetes proxy API. We essentially run a lighter-weight Prometheus with a very short retention period, just to collect metrics, in the clusters we manage, which clearly saves a lot of resources, and then we federate metrics from that Prometheus through the Kubernetes API, setting up essentially a TCP tunnel using the Kubernetes proxy API capabilities. That gives this level of centralization and cross-environment, cross-cluster analytics. A similar approach works for Elasticsearch and log collection. For single-cluster log collection it's usually, again, an ELK stack running in that cluster, which may become wasteful if you are running multiple clusters, and the same approach can be used to collect logs under this Kubernetes management platform, tunneling the log stream through the Kubernetes proxy API. We used RabbitMQ, but with the same success, for example, Redis queues may be used on the side of the managed, sort of child, Kubernetes clusters, while in the control plane we run a full-blown Elasticsearch and Kibana stack to provide access to those logs and log analytics. So, going back to the diagram from which we started: this gives you essentially your pocket cloud, your infrastructure abstraction and platform, where your infrastructure is represented by Kubernetes clusters and is ready to accept and provide higher-level managed services, or lets you mix and match. Because, make no mistake, I'm not arguing against managed services: if you can use managed services, like EFS in AWS for example, it's definitely preferable to setting up your own Portworx cluster, but sometimes you just cannot do that, or you can do it in some environments but not in others. So if you have this reliable infrastructure abstraction and platform layer, you are ready to set up the managed services layer as well, with much, much more ease than if you had planned it from scratch running on plain infrastructure, because a lot of operational and security governance concerns are already taken care of by this infrastructure abstraction layer. So that covers it. I hope I delivered the idea. Ready for Q&A.