Ladies and gentlemen, good afternoon and welcome to FOSSASIA Summit 2024. First of all, on behalf of the organizing committee, I would like to extend our sincere regards and thanks to all of you for taking valuable time to be with us today. To begin the program, I would like to invite on stage Ms. Arin Cheung-Jae-Lin, startup software consultant, National University of Singapore. Let's give her a round of applause to welcome her. Thank you. Okay, would you like to please join us in the front? Thank you. All right. So today I'll be talking about agile practices with scrum. It's actually a workshop, so we'll try to make this as hands-on as possible within half an hour, while still delivering the important information. Today's agenda: first, the theory and first principles of agile, what agile is actually about and why we want to practice it. Then the scrum framework itself, which is the first thing most of us associate with agile. Then the people and the teams that make up agile practice. And finally, knowledge creation: we are in the knowledge economy, so we want to know how agile works in terms of knowledge creation. So first, let's look at the theory and principles. What is agile, really? Literally, we're talking about a fast, iterative approach. If you want agile in a nutshell, in a single sentence, that's it: a fast, iterative approach. Any project that needs to be done, you do it in an agile manner. Basically, you do it in a loop, again and again; you get the feedback, you learn from it, and you put that learning back into motion. That's what agile is all about, literally. Next, let's look at the Agile Manifesto.
Back in 2001, 17 people came together at a ski resort in Utah, and they wrote the Agile Manifesto. Most of them were leaders in the software field; some came from mathematics as well: professors, book authors, conference speakers, software managers. They came together and decided what agile is, and they set down the four values of the Agile Manifesto. Individuals and interactions over processes and tools. Not that processes and tools are unimportant; they are very important, and in this technology world we need them. But we must first and foremost think about the individuals and the interactions between you and me. If we can't get individuals and interactions right, processes and tools don't count for much. So first individuals and interactions, and only then processes and tools. Working software over comprehensive documentation. Documentation is important, we are not disputing that, but your documentation means nothing if your software is not working. So working software first, then documentation. Customer collaboration over contract negotiation. Back in the days of the waterfall methodology, we would negotiate a contract, talk for maybe a month, write out all the details, and then: you pay this much, I deliver in nine months. No. We want customer collaboration throughout that nine-month period; we want the customer's input throughout the whole project. So customer collaboration comes before contract negotiation.
We still want some sort of agreement, but we want customer collaboration alongside it. Responding to change over following a plan. The whole key here is responding to change: agile methodology is about responding and adapting. So responding to change matters more than following a plan. You have a plan, yes, but if that plan needs to change, change the plan. A plan is good, but allow for change in your plans. Okay, next, let's look at the complexity of product development. If you are close to certainty and close to agreement, it's a very simple project, and a waterfall methodology is more than enough. If you are far from agreement and far from certainty, nothing is sure, then you have anarchy: everything is just a mess, in one word. But in between, where most of our projects lie, you have some agreement and some certainty, and whether the project is complicated or complex, this is where agile comes into play. Agile is best suited to dealing with these sorts of projects. Okay? So what is scrum? Where does the name come from? Who plays rugby here? Anybody? Oh, we have a rugby player. You do scrums in your rugby team, right? In rugby, a scrum is the whole team coming together in one mass of people, trying to get the ball and move it steadily closer to the goal. Literally, that's what the scrum methodology is about: everybody has the same goal; the software is that ball, and the whole team brings it closer to the goal. Okay, so this is scrum. And what's within each sprint?
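The certainty/agreement grid just described can be sketched in code. This is a hypothetical illustration only: the function name, the 0-to-1 scores, and the threshold values are mine, not the speaker's.

```python
# Hypothetical sketch of the certainty/agreement grid from the talk.
# Scores run from 0.0 (far from) to 1.0 (close to); thresholds are
# illustrative, not from any standard.

def classify_project(certainty: float, agreement: float) -> str:
    if certainty > 0.8 and agreement > 0.8:
        return "simple"  # close to certainty and agreement: waterfall is enough
    if certainty < 0.2 and agreement < 0.2:
        return "anarchy"  # far from both: just a mess
    return "complicated/complex"  # the in-between zone where agile fits best

print(classify_project(0.9, 0.9))  # simple
print(classify_project(0.1, 0.1))  # anarchy
print(classify_project(0.5, 0.7))  # complicated/complex
```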
In these circular iterations, each iteration is what we call a sprint. For software development, that's what it is. In the software development life cycle, we plan, then define, then execute, and once you implement you always have to test, and then review. Very important in agile methodology: you have to review everything that you do, and improve from that review. So what we're talking about is that in each iteration we run the entire software development life cycle, and we repeat it again and again: plan, execute, test, review; plan, execute, test, review; and so on. Okay? Alright, so scrum is actually an empirical approach. What is empirical? When we talk about empirical, we're talking about actual observation and experience. As you collect observations together, they become data you can work on and act on. You base your actions on data, on actual observation and experience, rather than anything else. Don't do it because other people say so; do it because there's data to prove it. Okay? So that's what empirical means. Now, an empirical process actually requires courage. Why? You must have trust and courage before you can have transparency, and it is transparency that puts the information out there for everybody to share. Be transparent about your failures, about your work, about the outcomes, about your values, all of this. You must have trust and courage between your team members and across the organization before you can have transparency. Once you have transparency, only then can you have inspection; only then can you actually look at the data.
With inspection, you have the data, you inspect it, and then you adapt to the situation based on what you see. So you see the progression: first trust and courage, then transparency in your organization, then you can work on that data through inspection and adaptation, and only then do you reach the goal. Okay. So the scrum values are very important. Why? Values matter to each of us as individuals, and as a whole they matter to the organization, because once the values are clear-cut, everybody in the organization knows what we are working towards. Everybody has the same goal and the same direction. That's why values are important, and why in scrum, too, these values matter for reaching the goal, for achieving what you're trying to do. Okay, so let's have a look at the scrum framework now. An important aspect of the scrum framework is timeboxing. You set a fixed time limit for every single event in scrum. Say you timebox a meeting to one hour: it must never run past that hour. Situations like "this meeting has been dragging on, it's not achieving much anymore, it's digressing" are exactly what timeboxing is designed to avoid. So, everybody has heard of stand-ups? Lovely. Okay. This is why I would like everybody at the back to come all the way to the front. Please join us in the front; there are a lot of empty seats. Let's do a very quick exercise.
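The timeboxing idea above can be sketched as a helper that cuts a meeting off when the box expires. This is only an illustration; the function and agenda items are invented, and the tiny box below stands in for a one-hour meeting.

```python
import time

# Illustrative timebox: walk an agenda, but stop the moment the
# box expires, parking whatever was not covered.

def run_timeboxed(box_seconds: float, agenda: list[str]) -> list[str]:
    started = time.monotonic()
    covered = []
    for item in agenda:
        if time.monotonic() - started >= box_seconds:
            break  # timebox reached: cut the meeting here
        covered.append(item)
    return covered

# A generous box covers everything; a zero box covers nothing.
print(run_timeboxed(10.0, ["updates", "blockers", "plans"]))
print(run_timeboxed(0.0, ["updates", "blockers", "plans"]))
```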
This is a workshop, so I'll need you to do a quick stand-up, a hands-on exercise. Everybody, very quickly: your background, who you are, what you did in the last 24 hours, and what you plan to do in the next 24 hours. Okay. I'll keep the timer right here for us, and we'll start. I'll go first. This is Erin Chong. In the last 24 hours, I did a city tour of Hanoi. In the next 24 hours, I plan to enjoy the rest of FOSSASIA's lovely sessions, today as well as tomorrow. How about you? Can you ask again? Sorry. Okay. Hello, everyone. My full name is Do Chiung Zhang. I work as a scrum master at a company doing digital transformation projects in Vietnam. This morning I had some meetings with my client at the company. I'm really excited about the agile and scrum topics you will present today. I have some problems and questions to ask you about real situations I face in my company, and I hope you can help me resolve them. After this event, I really want to expand my network and community, and especially, if you have time in Vietnam, I'd love to hang out or have a coffee to discuss agile. Thank you so much. Hi. My name is Peng. I study computer science at PTIT. In the last 24 hours, I spent the weekend at home. In the next 24 hours, I'm going to attend the FOSSASIA Summit. Hello, everyone. I'm a final-year student at Phenikaa University, working as an NLP engineer, always looking to gain more knowledge in this field. I want to participate in the workshop to apply scrum to personal projects, not just enterprise projects. That's all. Thank you. Hi, everyone. My name is Quan. I'm currently a senior student in the IT field. In the last 24 hours, I studied at home. In the next 24 hours, I will spend my time at FOSSASIA Summit 2024 and attend every workshop that I can. Thank you. Hello, everyone.
I'm Hugh, and I'm a developer in the web field. In the last 24 hours, I hung out with my friends and played badminton. In the next 24 hours, I will review some tasks from my team members. Hi, Ration here. I'm currently a full-stack software engineer with Outsource Digital, based in Singapore. In the past 24 hours, I flew in to Hanoi from Singapore. In the next 24 hours, I will continue to attend the rest of the FOSSASIA Summit. Thank you. Hi, my name is Nikhil Kathore. I'm from India. Yesterday I explored Da Nang, and from there I took a flight to Hanoi last night. Today I'm attending FOSSASIA and preparing for my talk, and in the evening I'm planning to explore Hanoi. Good afternoon, everyone. I'm Winning Win. I'm Vietnamese, but I just came back from Singapore. In the last 24 hours, I was enjoying my time with my family, and now I'm here to learn more about agile and scrum. Thank you. Hello, everyone. I'm Wichitong, a software engineer from Vietnam Group. In the last 24 hours, I took a trip to my hometown with my family. In the next 24 hours, I will attend the second day of this conference. Thank you. Hello, everyone. I'm Thao, from Vietnam. In the last 24 hours, I spent time at home. In the next 24 hours, I've come to FOSSASIA Summit, and I will give a talk session after your session, in Vietnamese. Hi, everyone. I'm Win. I work together with Thao. In the last 24 hours, I spent a lot of time on my talk; we have a session after yours. In the next 24 hours, I think I will spend more time on myself. All right. Thank you very much. Okay. And look at that: what we did took just six minutes. It does not take a lot of time to do a stand-up. Okay. So next, let me introduce what a scrum team is. A scrum team is made up of a development team of no more than 10 people. Just now, when we talked about who we are and our last and next 24 hours, we did a stand-up with a scrum-sized team of 10 people.
That's how long it takes: six minutes, that's all. Okay. So why a development team? In scrum, a development team is a team that manages itself. The manager does not hand you any work or assign you jobs; the team members assign themselves their own work. That's what we mean by self-managing. And the team itself creates the done increment; we'll talk a little more about the done increment later. Okay. Each scrum team has one product owner, and one product owner only. What does the product owner do? The product owner owns the product backlog: they take care of what goes into it, and their job is to optimize the value of the product, to be able to say, from the product backlog, what the priority is and what should be done first. And each scrum team can have only one scrum master, no more. The scrum master manages the scrum process itself, and also removes impediments. Traditionally, in the traditional mindset of software development, the software manager is the natural person to be the scrum master. So if you are the software manager of a team, you would say: let me take on the scrum master role. You manage yourselves; I take care of any impediments. If you have any issues, I handle them for you. You do the work and manage yourselves; I deal with any issues from the outside. Okay. So let's look at scrum itself: the scrum events and the scrum artifacts. At the core is the sprint; the sprint is the key to scrum. One sprint is no less than one week and no more than four weeks, so no more than one month. If your iteration is longer than one month, it's no longer scrum. Each sprint is no less than a week and no more than four weeks.
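The team shape just described can be written down as a small model: a self-managing development team of at most 10 people, exactly one product owner, and exactly one scrum master. This is a hypothetical sketch; the class, method, and member names are invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical model of the scrum team shape from the talk:
# dev team of at most 10, one product owner, one scrum master.

@dataclass
class ScrumTeam:
    developers: list[str]
    product_owner: str
    scrum_master: str

    def problems(self) -> list[str]:
        issues = []
        if len(self.developers) > 10:
            issues.append("development team larger than 10")
        if not self.product_owner:
            issues.append("missing product owner")
        if not self.scrum_master:
            issues.append("missing scrum master")
        return issues

team = ScrumTeam(["An", "Binh", "Chi"], "Quan", "Thao")
print(team.problems())  # an empty list means the shape is valid
```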
And before you start your sprints, you have your product backlog. Remember, the product owner is the one who comes up with the product backlog, and the product backlog is ordered by priority: the stories at the top are the highest-priority stories, with priority decreasing as you go down. The product owner will come in and say: this is the highest priority, I will move it to the top of the product backlog. And wherever there is a product backlog, there is a product goal. Then we take the top stories from the product backlog and put them into the sprint backlog. The sprint backlog is basically all the tasks required to complete the sprint, whether it's one week or four weeks. Once you have the sprint backlog, you can very easily determine the sprint goal from it. Then, at the start of the sprint, we hold sprint planning, timeboxed at no more than one hour for a one-week sprint and no more than eight hours for a four-week sprint. The daily scrum is what we did just now: our stand-up. We stand up and talk about what we did in the last 24 hours, what we plan to do in the next 24 hours, and any impediments, meaning any blockers you are facing that keep you from completing your work. Those are the three things each team member stands up and talks about. It's literally a space where we update each other on our progress, every single working day, for no more than 15 minutes. Just now, we went around 10 people and it only took us six minutes, so the daily scrum should not take more than 15 minutes.
As a team, you can decide when to hold the daily scrum. You might do it in the morning, once everybody is in: stand up, then tell each other what you did in the last 24 hours, your next 24 hours, and any blockers, and that's it. Or you might do it at the end of the day, at five o'clock: everybody stands up and does the daily scrum then. It depends on the team. Okay, so the increment is basically any working version of the software. Every iteration of the working software is called an increment. It does not have to come at the end of the sprint; it could come in the middle. The complete working software is what we call an increment. The last increment of the sprint is what we bring into our sprint review. The sprint review is held at the end of the sprint, of course, and takes no more than one hour for a one-week sprint and no more than four hours for a four-week sprint. The sprint review is usually where you bring in the customers. The customers come in and we look at the sprint together: what is correct, what they think needs tweaks or changes; all that customer feedback happens during the sprint review. After the sprint review, the development team comes together and we do a sprint retrospective. A sprint retrospective should last no more than one hour for a one-week sprint and no more than three hours for a four-week sprint. So what happens during the retrospective? The team has done the work and has already demoed it to the customer; now we sit down, and then what? We do a review. What do we discuss? How was this sprint? What was good about this sprint? What was bad about it that we need to change?
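The timeboxes quoted so far can be collected into one small lookup table. The figures below are the speaker's; note the official Scrum Guide states caps for a one-month sprint and says shorter sprints get shorter events.

```python
# Timebox caps as quoted in the talk, keyed by sprint length in weeks.

TIMEBOX_HOURS = {
    "sprint_planning": {1: 1.0, 4: 8.0},
    "daily_scrum":     {1: 0.25, 4: 0.25},  # 15 minutes, regardless of length
    "sprint_review":   {1: 1.0, 4: 4.0},
    "retrospective":   {1: 1.0, 4: 3.0},
}

def max_hours(event: str, sprint_weeks: int) -> float:
    # Look up the cap for a given event and sprint length.
    return TIMEBOX_HOURS[event][sprint_weeks]

print(max_hours("sprint_review", 4))  # 4.0
```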
And what should we try next? What have we learned from this sprint? All of that happens during the sprint retrospective, okay? We also have what we call the traffic light: start, stop, continue. In reverse: what was good, you want to continue doing; what was bad, you want to stop doing; and what you learned and want to try next, you start doing. Okay, then after the retrospective, there is the definition of done. What is the definition of done? It is an agreement within the development team that says: when we complete a task, it needs to meet these criteria before we can call it done. That's why we call it a definition of done: we define what we consider done, put it on paper, clear-cut: if a task ticks all these boxes, it's done. After a sprint review, we sometimes decide we want to add something to the definition of done, or that something is no longer necessary and should be removed; we usually make those changes during the sprint retrospective. Okay, so this is the typical flow of a sprint, and the typical flow of scrum. Any questions, we will leave to the end, okay? I'll try to cover as much as I can within the half hour. Okay, so a sprint is actually an agreement between whom and whom? Anybody? It's an agreement between the development team and the client, the customer in this case. The development team is saying: every sprint, you, the customer, can have us do anything you want; every sprint, you tell us what you want. And this is the promise to the customer: we give you flexibility over your own product.
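The definition-of-done idea above is just an agreed checklist with an all-or-nothing test. The criteria in this sketch are invented examples, not the talk's; the point is that a task counts as done only when every box is ticked.

```python
# Illustrative Definition of Done. The criteria are examples only;
# each team writes its own list and agrees on it together.

DEFINITION_OF_DONE = [
    "code reviewed",
    "unit tests pass",
    "merged to main",
    "documentation updated",
]

def is_done(ticked: set[str]) -> bool:
    # Done means every agreed criterion is satisfied, not just some.
    return all(item in ticked for item in DEFINITION_OF_DONE)

print(is_done({"code reviewed", "unit tests pass"}))  # False
print(is_done(set(DEFINITION_OF_DONE)))               # True
```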
And it is also an agreement from the customer to the development team: we will leave you alone to do the work. We trust you to complete the work we told you we want. This is the client's promise of stability to the development team: for this sprint, we will let you do the work and won't interfere; after the sprint, we can tell you something new that we want. Okay? That is what it means to say a sprint is an agreement. Okay, we probably only have enough time to talk a little about what people means in agile methodology. When we talk about people, what motivates people? Anybody? What motivates you? Money? Recognition? Rewards? Interestingly, what actually motivates people is autonomy. What's autonomy? The ability to do what I want, to organize my own work. If I can do what I want, I want to do it; everybody feels the same. That is what actually motivates people. Then mastery of my own work: to become better at my work, to keep improving. And purpose: to make a contribution through the work itself. These are the three key things that motivate people. Okay, so to shift from red-zone behavior to green-zone behavior: you want to be listening and responding, instead of listening but not receiving, where you talk and talk but take nothing in. You don't want that; that is bad behavior. You don't want to respond defensively; you want to be calm and speak casually. Don't feel wronged, don't get defensive. Have open behavior: open about your faults, open about your mistakes, open to change. Don't get involved in battles and antagonism. You want to welcome feedback, because feedback is the only way we learn; we want that feedback.
Don't take a win-lose perspective; mutual success is what matters, along with collaborative excellence: we want to work together to reach quality, to reach excellence. Okay, so that's people. Next, let's have a quick look at teams. What do teams mean in agile? In agile you hear this very often: it's all about continuous, at a sustainable pace. Continuous innovation, continuous integration, continuous development, continuous improvement, at a sustainable pace. Sustainable pace means you want that continuity at the right pace; if the pace is too heavy, everybody falls over: too tired, cannot continue. No, you want it to carry on, so it has to be a sustainable pace. Okay? Always have a cross-functional team, with T-shaped skills; everybody understands cross-functional teams and T-shaped people. Keep this in mind when you build your team. And we do not want complete harmony. Complete harmony means nothing is actually improving: everybody just agrees with each other, and that's it. We also do not want walls; we don't want contests or fortification within the team. What we do want is productive dissent: if you have an opposing view, tell me about it. We want to improve on that; we want to see each other's diverse opinions and perspectives. So this is the conflict response model: with high avoidance and low assertiveness you sit at the avoiding and accommodating end, while high assertiveness together with high cooperation puts you at the green end, which is engaged collaborating. That's what we want. Okay, I don't think we have time for knowledge creation, so I'll skip that. Thank you very much. This is just a quick look at the slides I won't have time for. But thank you very much. Please give me your feedback. Thank you. Okay. Hello, everybody.
I'll start. I'm Nguyen Van Thao, and this is Tran Minh Huy. We come from a community called the Co-Op. I see some foreign guests here; my session is in Vietnamese, but if you have any questions, please send them to us. Thank you. We come from a community called the Co-Op, where we share experiences and values around open source. We have a web page where we share a lot of stories about open source. We really love learning about open source, and we came here today to share those values with you. What we'll show is not new, but it's interesting how we do it: how we run our operations and how we automate them with open source. Our presentation has four parts: the issues we had to deal with in managing our infrastructure, the tools we adopted, how we automated them, and finally a small demo for the audience. Let's get started. As Thao just introduced, our main job is managing infrastructure. We manage all the infrastructure of our company, from hardware like routers and switches to servers and storage devices. When a system is built and has to be maintained over time, we need tools to track its information. In the past, we used traditional methods: we tracked that information in Excel. But Excel didn't hold much information, so we often had to go and look at the devices themselves. Managing information in Excel causes problems: when there is too much information, updates might only happen once a day, which costs a lot of effort and time, and, not through anyone's fault, information goes missing. And the most important thing is keeping device information available for reporting to higher leadership.
That is also a problem we needed to be aware of. Those are the problems we had to deal with. So we looked into open-source products that could be used for this kind of management, and we chose two tools: CheckMK and NetBox. Let me introduce CheckMK first: an open-source tool used for monitoring. For us, this is a good product because it covers all our devices. CheckMK, which grew out of the Nagios ecosystem, has been around for quite a long time; it has accumulated a lot of capabilities, and there is good material to learn from. The second tool is NetBox, an open-source tool originally developed at DigitalOcean. You can think of NetBox as a source of truth that collects information about your devices: the model, the role, the IP addresses, the machine details. All of that information is kept in NetBox. Another capability we use on top of these is alerting: it detects conditions such as a server's resource usage running too high, or a server in the system crashing. Those are the things we want to recognize. As managers, once we can see these things, we can judge whether the load is too high or not, and take measures to prevent problems in the future. In addition, when team leaders look at it, they can see whether the system is stable. On top of this, we built one more thing: a history graph, so we can look at the system more closely over time; when we hand things over, others can read it much more easily. These tools served us well for a long time. However, because they are two separate tools, they are not connected to each other, and we kept thinking about how to complete the picture.
When a VM or a device is created, certain things happen to it. So we decided to follow an event-driven approach for the product we built. As we studied how this works, we realized that our operations generate a lot of events; for example, when a device fails, that is an event. Before, we didn't use those events, and they brought no value. We decided to capture those events in our management. Ansible Rulebook is a new product in the Ansible family; it helps automate the entire operation process by connecting events to actions. The events we generate in the system are fed to Ansible Rulebook, which matches the events we care about and then triggers an action for each matched event. As you can see, events can come from many places: a monitoring system or another device. When an event arrives at Ansible Rulebook, the rulebook filters it and triggers a playbook. Ansible Playbook is already familiar to most system administrators, because it is a basic tool: we define what needs to be done, and once it runs, the state is as we want it. Of course, something still has to trigger the start. With Ansible Rulebook, the playbook trigger is activated automatically. As Thao said, Ansible Rulebook brings a different architecture: event-driven architecture. This is a newer style; you may have seen microservice architecture, but this one is newer, and in Vietnam people are only starting to use it. Thanks to Ansible Rulebook as a general tool, we have fully automated many of our workflows. Today we will share the first ones we put to use.
As you can see, when a user in a unit wants a machine created, the system first collects the user's information. Then an event is sent to Ansible. Ansible builds a payload from that event's information and decides which actions to perform. Ansible then calls the virtualization platform to create the machine, automatically registers it in monitoring, records it in NetBox, and finally sends the details to the user. Before, the steps of creating a machine, adding it to monitoring, and recording it in NetBox all had to be done manually. Thanks to Ansible and the tools we use, we have automated this whole process. A second use of events: when a server that has been added to the CheckMK system runs into a problem, the same engine we just described kicks in; it updates NetBox and handles the simple cases. For example, if a server is down or has crashed, Ansible can restart the server or restart the service. Since we adopted this process, we have saved quite a lot of time on setup and increased our management effectiveness. We have also cut the time lost when a user or a manager forgets to update the monitoring and inventory records, because everything flows through the open-source tools we work with: as you can see, we can notify Ansible or update NetBox, and the systems stay in sync. With open source, we can participate in the project to make it better; the flip side is that open source requires us to invest in understanding and research. To make it concrete, I would like to give a small demo of how an update flows from an event into the system. I have prepared a video.
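The provisioning flow just described can be sketched as a pipeline of steps. Every function here is a stand-in: real code would call the hypervisor, CheckMK, and NetBox APIs, while this sketch only records the steps in order. The names are invented for illustration.

```python
# Hedged sketch of the event-driven provisioning flow from the talk.
# Each step would be an API call in the real system; here it is a
# log entry so the order of operations is visible.

def handle_vm_request(user: str, vm_name: str) -> list[str]:
    steps = []
    steps.append(f"create VM {vm_name}")            # virtualization platform
    steps.append(f"register {vm_name} in CheckMK")  # monitoring
    steps.append(f"record {vm_name} in NetBox")     # source of truth
    steps.append(f"notify {user}")                  # hand details to the user
    return steps

for step in handle_vm_request("an.nguyen", "vm-web-01"):
    print(step)
```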
This is a VM on our system; it is running and online. We add it to CheckMK to monitor information such as CPU. There is a rule set up in advance, so when we create the VM it is automatically added to CheckMK and to our management system. For example, suppose the VM is turned off and that state needs to be reflected in our system. When the VM goes down, CheckMK detects it and sends an event, and our system records that the VM is off. CheckMK actually notices this twice: once when its agent's service stops responding, and once when the host itself is seen as down, and each detection sends an event. We can configure which service states or host states should generate an event. In the demo, after the event is sent, you can see the VM turn off and the state update flow through to our system. That is the end of my demo; we have also shared the use cases we rely on. If you have any questions, you can ask us. We have 25 minutes, right? We have 20 minutes, maybe 5 more. If there are no questions, I will finish sharing my use cases here. Thank you.

Software visualization is a very broad topic that covers a lot of the SDLC, but given today's time limits, we'll be talking about a few specific aspects of it that would help us. So let's look primarily at code-based visualization.
So why software visualization? A study showed that as developers we spend more than 25% of our time just navigating through a code base, whether it is good code that is highly modular and structured or, as code bases eventually tend to become, spaghetti code. Why do we do this? We navigate through code to answer our questions. These are just a few of them: understanding what the code base does, identifying specific data flows and function flows, finding module dependencies, and a lot more. As we do all this, we keep much of this information in our heads; these are called mental models. But surprisingly, not all of us have the same mental models, because software is very hard to map onto a real-world metaphor. Throughout the SDLC there are many places where we do have diagrams and visualizations. The most common ones many people are familiar with are UML diagrams in the design phase: class diagrams, sequence diagrams, and use case diagrams that let you explain the structure of a piece of software to people. When you move on to testing, deployment, and monitoring, you again have a lot of visualizations that show how the software is working and behaving. But in the implementation phase, the phase where we spend most of our time, we don't really have a lot of visualizations. Now, software visualization is not a new concept. Its goal is to identify mental models and metaphors that can translate software into a physical metaphor people can easily understand. Why? Because that is how we can communicate with each other about the software we develop. The domain has existed for more than two decades; the book shown here was written two decades ago and indexes every effort made by various researchers over time to describe how software evolves.
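Some of those navigation questions ("which function calls which?") can be answered mechanically without running the code. As a small sketch using Python's standard `ast` module on a toy two-function module (the source string is invented for illustration):

```python
# Static analysis sketch: extract which functions a module defines and
# which plain-name calls each one makes, i.e. a tiny call graph.
import ast

SOURCE = """
def load(path):
    return open(path).read()

def process(path):
    data = load(path)
    return data.upper()
"""

tree = ast.parse(SOURCE)
calls = {}
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        # collect calls like load(...) / open(...); skip method calls
        called = {n.func.id for n in ast.walk(node)
                  if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)}
        calls[node.name] = called

print(calls)
```

This is the raw material that structure visualizations are built from; the hard part, as the talk goes on to say, is presenting it at scale.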
This becomes very necessary when you think about the scale of software. When you build a project that is initially small, you put in a lot of effort to structure and design it. But code bases grow, which naturally happens in enterprise settings, into projects with millions of lines of code. How do you communicate that? That is where software visualization can help. Software visualization is broadly categorized into three areas: software structure, software behavior, and software evolution. Software structure represents the static structure of the software, and this is where most of the design diagrams we currently use fall. Software behavior is harder; it depends on the technology stack you use. Certain stacks or domains translate directly to good behavioral visualizations. For example, in robotics you have behavior trees and simulations that can show you how your software works. But what about an enterprise resource management system, a warehouse management system, or an order booking system? It is hard to visualize the behavior of these. The final area is software evolution. This is an interesting topic, because code bases are not static; they are not written in a day. They evolve over time: many people work on the software and multiple parallel changes keep happening. You can analyze all of this data (we'll get to some cool visualizations there) for dependency tracking and change impact analysis, and make decisions about whether to refactor or optimize your code, evaluate performance, and so on. So with that overview of structure, behavior, and evolution, let's get into the challenges and the benefits of software visualization.
Since I keep talking about UML diagrams: many of you, depending on the language stack you use, may already be familiar with ways to auto-generate diagrams from your code, for example Java with Eclipse or IntelliJ IDEA, or tools like StarUML, which is what's shown here. You can draw these UML diagrams and have them generated into stub code from which you can start working, and some extensions also generate the diagrams directly from your code base. But they have their limitations. This is a very simple system with five or six classes. Now think of a five-million-line code base: the auto-generated UML diagram is just going to be a wall of classes, and you will have to spend a lot of time filtering it before you can make any sense of your software system. So researchers at that point came up with another way to represent software, especially very large code bases: the cityscape. Since software does not directly translate to a physical metaphor, people have been trying to identify metaphors, and one such metaphor is the cityscape. The city is a very old visualization, but the idea behind it is that every building stands for a class or a function, a collection of buildings could be a package, an island could be a module, and the lines connecting the islands are your dependencies and imports. These visualizations have their benefit: you can actually see how big your code base is. But they have their challenges as well, as I'll talk about. Imagine again generating the cityscape for a five-million-line code base. How do I read this? If I have no idea whatsoever of what I'm looking at, I'm just looking at a very complex diagram.
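The cityscape metaphor is, at its core, a mapping from code entities to shapes sized by metrics. A minimal sketch of that mapping, with invented classes and with lines of code as building height and method count as footprint (real tools choose their metrics differently):

```python
# Sketch of the cityscape mapping: class -> building, metric -> size.
# The classes and numbers below are made up for illustration.

def building_for(class_name, loc, n_methods):
    return {"name": class_name, "height": loc, "footprint": n_methods}

city = [building_for(*c) for c in [
    ("OrderService", 420, 18),
    ("Invoice", 60, 4),
    ("Utils", 1500, 95),   # a suspicious skyscraper
]]

tallest = max(city, key=lambda b: b["height"])
print(tallest["name"])  # the class that dominates the skyline
```

Even this toy version shows both the appeal and the limit the talk points out: an outlier like `Utils` jumps out instantly, but with thousands of buildings the skyline alone tells you very little.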
That doesn't solve the use case, but despite that, cityscapes have stuck around for a very long time. This is an open source project you can check out right now: ExplorViz, at explorviz.dev. It works on Java code bases, and here again you see a cityscape. This is the Pet Clinic sample project in Java, and this is how it's visualized. You can see the individual modules, the classes, and how they are connected and imported, all now rendered in mixed reality, but the idea is still the same, and it would hit the same problems at large scale. There have been other ideas on how to better represent our code bases. This next project is shut down now, but the idea behind it was: can we have a better way to enable people to understand our code? Can we logically group our code in ways that translate better to our mental models? Here the code is analyzed and categorized into different execution stages: the entry point, where initialization happens, where the functions are defined, so you can logically navigate the code base. This is better, of course, but it has the same problem of language: many of these tools are language specific. They only work on Java, or only on C++, or only on C#, and a lot of work goes into explaining the code to you. This is static code analysis; it does not use any logical understanding of what is actually happening, so you are limited to visualizing your structure. Let's see the tools in other ecosystems. These are some Python examples that let you extract your dependency graphs, and at the top there you see VizTracer.
VizTracer is an extension you can install that shows you how your functions and classes execute and how long each takes, and lets you navigate your code that way. This helps you optimize your code. Below that is a research effort from a while back: when looking at code bases for optimization and refactoring, wouldn't it be good if developers saw, right in the code, exactly how much time a particular line of code is taking? Now, that one did not materialize into a usable open source product, but researchers are always working on these kinds of things. When we come to evolution, as I mentioned earlier, we have all this metadata that lives around the code. It's actually very interesting when you think about it: not just the code, but your version control history, your project management data, your metrics. Can we visualize that, and when we do, can we infer any meaning? Can we understand how our code base works and how our team works? Here is another open source project, called Gource. It takes the metadata around your code and visualizes it as a bloom diagram, showing over time how many people were working on which projects or which parts of the modules, how your project is evolving, and who is working on what. It is exported as a cool little movie, but can you actually get more data-driven insights from these videos? As the timestamp at the top keeps moving, you can see the project stabilize after a while, with users only working on certain modules, improving them and keeping them up to date. You can get this information right out of your code base's metadata. Now, this is another project, called CodeScene. It is not completely free.
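The function-level timing that tools like VizTracer automate can be approximated by hand with a decorator; this is only a sketch of the measurement idea, not how VizTracer itself works internally:

```python
# Minimal function-timing sketch: record how long each call takes so
# the results can later be visualized or ranked.
import time
from functools import wraps

timings = {}  # function name -> list of call durations in seconds

def timed(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            timings.setdefault(fn.__name__, []).append(
                time.perf_counter() - start)
    return wrapper

@timed
def slow_sum(n):
    return sum(range(n))

slow_sum(100_000)
print({name: len(calls) for name, calls in timings.items()})
```

Dedicated tracers do this automatically for every call and render the result as a timeline, which is what makes them useful for navigation as well as optimization.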
There are certain aspects that are free, but it is also a paid product. The idea here is that the insights you just saw in the video can be turned into usable metrics: from code hotspots to architecture, to team dynamics, to system performance, all of this extracted directly from your version control metadata. For those who are not familiar: code hotspots are the regions of the code where a lot of activity is happening. This is important when you want to identify the areas that need a lot of refactoring or optimization. Basically, the rule of thumb is that if a lot of people are working on one single file, that file needs to be split up and made modular. To make design decisions like that, such visualizations become essential. There is actually a talk, and also a book, specifically about this and about what insights can be derived from your metadata: it's called Your Code as a Crime Scene. It's an interesting way to look at code. But despite all of this, if you look at our stack today, we don't actually use a lot of visualizations. We just open up our IDEs and start writing code. Why is that? Like I mentioned earlier, a lot of the time it's about not having an intuitive representation that you can immediately map your ideas to. There's a learning curve with any of these visualizations: you have to spend a lot of time understanding how they help us. And at other times these visualizations are not automatic. Let's say you're using OpenTelemetry: you'll have to instrument every part of your code if you want to visualize your optimization and performance. And then maybe these things are not easily integrable into your existing workflows. As I mentioned earlier, they are very, very stack focused.
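The hotspot rule of thumb above is simple to compute: count how often each file shows up in the version control history. A sketch with a hard-coded sample history (in practice you would feed in something like `git log --name-only` output):

```python
# Code-hotspot sketch: the file that changes most often is the first
# refactoring candidate. The file list below is an invented sample of
# per-commit touched files.
from collections import Counter

git_log_files = [
    "core/orders.py", "core/orders.py", "core/orders.py",
    "core/invoice.py", "ui/main.py", "core/orders.py",
]

hotspots = Counter(git_log_files).most_common()
print(hotspots[0])  # (file, change count) for the hottest file
```

Tools like CodeScene layer team and time dimensions on top of exactly this kind of count, but the core signal is just change frequency.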
Some things work only for Java, other things work only for C#, and other things work only for C++. But more importantly, there's information overload. When looking at very large code bases, there is no way for any one person not to be overwhelmed by all the information they get, and the tools lack the context that's required, because you don't need to see all this information every single time. Maybe one day you're trying to understand your whole code base, but on other days you're just trying to find out how a particular function works. This again is an open problem. It's not solved, but can we actually make more focused, intuitive, and stack-agnostic visualizations that enable us to understand our code bases better? Right now, with the evolution of generative AI and better code understanding, these things could become possible. There are three key things that need to be solved for us to generate usable visualizations. The first aspect is code understanding: how do you understand large volumes of code? The second is the metaphors and mental models: if people have mental models for understanding their code bases, can we have a visualization engine that actually renders those unique visualizations for each person? That's context-based, personalized representation. And then you finally need the technology to enable that. AI agents today, like GitHub Copilot or other code understanding tools, do work for small amounts of code. You can take a 2,000-line file and ask GitHub Copilot to generate a sequence diagram, and it will actually work for you. For diagrams we now have languages such as Mermaid.js and draw.io. These are descriptive languages, like LaTeX if you're familiar with that, in which you can describe a diagram, which is why they translate very well to the large language model use case: it's essentially just text to text.
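Because Mermaid diagrams are plain text, generating one really is just string emission. A sketch that renders a list of interactions as standard Mermaid sequence-diagram source (the participants and messages are invented):

```python
# Emit Mermaid sequence-diagram source from a list of
# (caller, callee, message) interactions; "A->>B: msg" is
# standard Mermaid sequence-diagram syntax.

def to_mermaid(interactions):
    lines = ["sequenceDiagram"]
    for caller, callee, message in interactions:
        lines.append(f"    {caller}->>{callee}: {message}")
    return "\n".join(lines)

diagram = to_mermaid([
    ("User", "OrderService", "place order"),
    ("OrderService", "Inventory", "reserve items"),
])
print(diagram)
```

This is why the text-to-text framing in the talk matters: an LLM never has to draw anything, it only has to produce source like this, which a renderer then turns into the picture.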
The model is trying to understand one form of text and then describe it in another form, and you can actually try this out today: take any small code base and ask Copilot to draw a diagram for you. But as these code bases scale, even Copilot, or any other large language model for that matter, cannot describe that diagram in great detail out of the box. The output becomes very abstract. If I'm trying to explain a simple order management flow in a very small system, I'll actually get descriptions of the exact process; at scale it will just say order inputs, order processing, order validation. That does not give you details about the exact implementation, only a high-level overview of what the code base is doing. And it may or may not be correct, because hallucinations and the correctness and repeatability of large language models are a problem. You can work around this too; there are certain approaches, but since the technology itself is very new, all of these research activities are currently in progress. There's a lot happening in software visualization research today: from understanding what our mental models are, how each of us thinks about a code base, and how we keep all this information about a particular code base in our heads even though we can't really describe it. It is easy for a person who has been working on a particular code base to explain everything about it to you, and while that person is translating the thought from their head into English, or whatever language, a lot is happening in converting a mental model into understandable, standardized models. Then there's work on understanding the cognitive load, the learnability, and the emotions that surround people when they are navigating these code bases, and how visualizations can deal with the increasing trend of complexity. Because, as I said, these visualizations currently don't scale to very large code bases.
And this last piece of work: can LLM-based agent workflows be used to understand code and visualize code better? That is actually an open problem that even our team is working on. We are Wipro Lab45 Research; we work on a lot of these upcoming research areas, from software visualization to robotics and other domains that involve a lot of generative AI. This is our work, and we are making progress, but software visualization is a domain that can still benefit from a lot of your input. If any of you find this interesting, there are many ways you can contribute: you can start today by contributing to the open source projects I mentioned earlier, or you can get in touch with the researchers working on this, like us, or through the small website at the end of the slides, a community-driven wordpress.com site where people and researchers are indexing, as much as possible, every effort happening in this space. Very soon we could end up in a place where we are not just coding with text in our IDEs: maybe we'll program visually, describe any kind of project or system using visual data, and navigate from visualizations to code directly, because all of that could become possible soon. With that, I'd like to conclude my talk for today. Thank you. Any other questions? Okay, thank you.

My name is Nikhil Kathore. I am working as a principal software quality engineer at Red Hat, and for the last three years I have been testing software-as-a-service platforms for Red Hat. Today we are going to discuss continuous delivery and testing in software as a service. Before that, let's take a quick look at today's agenda.
First we will look at what software as a service is, for those who are not aware of it: a quick introduction. Then we will look at the strategy we follow, and as a sub-component of it the development methodology under which the services are built. We will also look at the deployment pipeline, how those services get deployed, and at some of the testing challenges and how the testing is done. So, software as a service. I think everyone here has already heard about open source; we all know open source dominates the software ecosystem, right? And over time, as demand increases, the cloud is also gaining popularity. So we can say open source software is moving to cloud platforms in terms of a delivery model, and that delivery model is what we call software as a service. Basically, software as a service is a distribution model in which cloud providers host applications, and whoever wants to use one as a user just goes ahead and subscribes to it and can basically start using it. If you look at the architecture, traditionally we had monolithic architectures, but with time and with new cloud-native services we are moving to microservices: the front end as one service, the back end as another, and the database as yet another service. The integration where all of these come together is what produces the end-user experience. And this is again one of the reasons this is becoming a popular choice among users: they just go to a cloud provider, opt in, subscribe, and start using whatever their business requires. The challenge they used to have of managing the infrastructure and the deployment is gone with this option.
But in SaaS there are two things that matter the most. The first is speed and the second is accuracy. When we talk about speed: users want new features in their experience very fast, and they don't want to wait for them; the world is changing, AI is coming up, and they want things much faster. So if we have to build our software in a quick and safe way, we need a strategy behind it to achieve that. Basically: don't sacrifice software quality for speed. As I said, this model is evolving and being adopted fast, so we also need to modify our monolithic testing strategy. I have divided the guidance here into three pieces. The first is the software development life cycle, where the services are built. The second is the delivery pipeline: how these services are deployed across different environments before going to production. And the last is operational monitoring. Once you have deployed a service to the production environment, that's not where you stop. You also need to monitor it and make sure your users are getting the correct experience. For each of these systems we monitor each step, learn from it, and keep improving. So, here are some points about the development methodology. I'm not going to go into every point, but the most important and most applicable ones, the ones that were not in our traditional model, come down to three quick points that we think matter when developing services. The first is quick cycles of small code changes. As I said, speed is critical in the SaaS model, so customers and users need features quickly. Gone are the days when customers or users waited months, quarters, or years for a release.
Customers want these features fast, and they want them quickly. Short cycles of commits help features and fixes reach production early, so users can use them and see value out of them. This also gives automation engineers time to develop the automation: whenever a developer builds something that goes to production, your stakeholders or quality engineers can go ahead and write automation for that particular function. And, as we will discuss, automation is key to the deployment pipeline: if you have automation in place, you can release your software very quickly. So rather than pushing one big bunch of code once per release, we make small changes to production a little at a time, whether that's each commit, daily, weekly, or even multiple times a day. The next point is short-lived branches. If, as a software developer, you are working on a feature, rather than keeping it in a separate long-lived branch, you develop it behind a feature flag in the production environment, where a certain set of users can use and test those particular services; we also call this a beta or preview. The architecture is also important here, because we are talking about integration, where one service depends on another. If you are developing different microservices, you always have to be on the latest code of your dependencies so that nothing breaks in production. Make sure you are always on the latest code of your dependencies before going to production.
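The short-lived-branch point works because unfinished code ships behind a feature flag: the commit is in production, but the new path only runs for the users the flag enables. A minimal sketch (the flag name, groups, and flow names are invented):

```python
# Feature-flag sketch: merged code is live in production, but only
# users in an enabled group actually exercise the new code path.

FLAGS = {"new-checkout": {"beta-users"}}  # flag -> groups it is on for

def is_enabled(flag, user_groups):
    return bool(FLAGS.get(flag, set()) & set(user_groups))

def checkout(user_groups):
    if is_enabled("new-checkout", user_groups):
        return "new checkout flow"   # the feature behind the flag
    return "old checkout flow"       # the stable path for everyone else

print(checkout(["beta-users"]))  # beta users get the new flow
print(checkout(["customers"]))   # everyone else keeps the old flow
```

This is what lets every small commit merge to the main branch safely: enabling the feature becomes a configuration change, not a deployment.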
So those are the points I wanted to cover; the bottom line is that code only helps the user when it's in production. The next strategy point is the deployment pipeline. As I said earlier, this is the cloud era and it moves very fast, so each step in the CI/CD pipeline has to be automated, with some kind of metrics and some kind of gating around it. All systems should be monitored with metrics. The different steps are development, build, test, deploy, and finally release. If you look at the diagram: a developer creates a PR, that is, a pull request. Then, as a QE stakeholder or quality engineer, you go ahead and add tests around it on the PR by deploying the particular change to an ephemeral environment. Ephemeral environments are short-lived environments you can use temporarily to test your changes and automate around them. So by the time the PR gets merged, you already have the automation running. As soon as the code change is merged, it gets deployed to the next environment, which we call the stage environment, where again the regression tests, functional tests, integration tests, whatever you have, run and are monitored. There is a gating mechanism at every promotion of the code, from your PR all the way to production. And what if something goes wrong? You also need to make sure that you can roll back or revert the change within a few cycles. That's why the metrics we are talking about are critical: you have to monitor your systems in all environments to catch issues proactively. That's also the reason operational monitoring is important here.
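The gating mechanism described above boils down to a simple decision rule: a change is promoted only if every gate passes, and any failure maps to a rollback. As a sketch (the gate names are illustrative; real pipelines wire these to actual test suites and metrics):

```python
# Promotion-gate sketch: promote a change from stage to production only
# if every gate (test suite, metric check) passed; otherwise roll back
# and report which gates failed.

def promote(gates):
    failed = [name for name, passed in gates.items() if not passed]
    if failed:
        return ("rollback", failed)
    return ("promote", [])

gates = {"unit": True, "integration": True, "regression": False}
decision, failed = promote(gates)
print(decision, failed)
```

Making the gate an explicit, automated function is what lets every commit be a release candidate: no human has to decide case by case whether a change is safe.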
When we talk about operational monitoring, we have different things to take care of. The first is alerts and notifications: when something goes wrong, make sure the right stakeholders get alerted before it becomes a problem for your users. The next is dashboards. Here again we talked about different visualization mechanisms using different tools: you have open source options like OpenTelemetry and Kibana, and monitoring tools like Splunk. These can help you create alerts and dashboards so you can monitor your users and your application before issues become problematic for your users. Third is logging, which is also important. It is possible that your functionality is working fine but your server is going wrong underneath: throwing a number of errors, or running out of memory. So you have to monitor your whole system, starting with metrics like CPU, memory, and network input/output throughput; logging is another way to monitor that aspect. And the last one is user metrics. There are a number of applications where a particular service needs to scale for different occasions. Take an example: when Christmas comes, shopping-related software services see their user numbers spike at that particular time, and in such cases your service has to grow. So you also have to monitor user metrics so you are ready for that, can scale the service as you want, and can basically make sure the customer experience holds up.
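The alerting piece above is, at its simplest, thresholds plus routing: compare each metric against a limit and decide who gets notified. A sketch with invented thresholds and team names:

```python
# Threshold-alerting sketch: flag every metric over its limit and
# route the alert to the right stakeholders. Limits and routing are
# made up for illustration.

THRESHOLDS = {"cpu_pct": 90, "memory_pct": 85, "error_rate": 0.05}
ROUTING = {"cpu_pct": "infra-team", "memory_pct": "infra-team",
           "error_rate": "service-owners"}

def check(metrics):
    return [(name, ROUTING[name])
            for name, limit in THRESHOLDS.items()
            if metrics.get(name, 0) > limit]

alerts = check({"cpu_pct": 95, "memory_pct": 40, "error_rate": 0.01})
print(alerts)  # only CPU is over its limit here
```

Systems like Prometheus with Alertmanager implement exactly this shape at scale, with the thresholds expressed as alerting rules rather than a Python dict.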
Those are the pieces of guidance we divided things into; finally, some of the testing challenges. As I said, releases happen very frequently, multiple times a day, so as a QE you need to make sure you are selecting the right test suites for each promotion of the code. We also aim for less testing per software element while meeting the greater overall demand for testing: your test cases have to be carefully selected, but run frequently, so that if something goes wrong you are notified of the risk to that particular software and can act on it quickly. The next challenge is always being ready for release. Since any particular change can go to production, and you have to make sure nothing is at risk because of that change, you need to know exactly what you are testing and how much time it takes. Continuous testing isn't simple to implement, but it adds real value to your business. The next thing is progressive delivery and feature flags. If you have heard of progressive delivery: you enable your service in production for a certain number of users and start measuring the scalability and performance of your application, so you can see how the service is doing. And with feature flagging, you enable the service for particular users, get fast feedback from them, and make changes to the services according to their requirements.
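Selecting the right tests per promotion can be as simple as a mapping from changed modules to the suites that cover them, with a smoke test as the floor. A sketch (the module-to-suite mapping is invented; real systems often derive it from coverage data):

```python
# Risk-based test-selection sketch: run only the suites mapped to the
# modules a change touches, but always at least a smoke test.

SUITE_MAP = {
    "billing": ["test_billing", "test_invoices"],
    "auth": ["test_login", "test_sessions"],
    "ui": ["test_smoke"],
}

def select_tests(changed_modules):
    selected = []
    for module in changed_modules:
        for suite in SUITE_MAP.get(module, []):
            if suite not in selected:
                selected.append(suite)
    return selected or ["test_smoke"]  # never promote with zero tests

print(select_tests(["billing"]))
print(select_tests([]))
```

The trade-off the talk describes is exactly this: a smaller, targeted suite per promotion, run far more often than a traditional full regression pass.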
The next is testing in production. This may sound weird, but yes: testing in production. It can happen many times that your services work fine in the pre-production environment but break in production, due to dependencies or other differences. So the point is that you also have to make sure your services are working in production, by continuously running a set of test cases there as well; we call this shift-right testing. As a QE it is also important to have an understanding of the services and the data flow. Why are we talking about services and data flow here? Because one of the challenges with software as a service is data migration. If you have implemented some change in the database, your previous data should remain intact; it should not change just because you deployed something new. So database migrations and an understanding of your services are really important for a quality engineer to have. And finally, the tester's expertise in customer experience. One big aspect of this is that in a monolithic or traditional setup you get access to the logs, the resources, and the application where it is deployed, but in SaaS you just get the front end, as a user does. The user may see that the functionality is working fine, but has no way to see what is happening in the back end, so that understanding is important in terms of the customer experience. Whoever is looking at your services from a quality perspective has to make sure the customer experience is also good. So I think the whole point is that continuous testing for SaaS is very important, and you also have to make sure you involve the QE or quality assurance engineers early in the cycle, either in the planning phase or in the requirements phase, where they build an understanding of what you are delivering. So I think
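Progressive delivery, mentioned just above, is usually implemented by deterministically bucketing users and enabling the feature only below a rollout percentage, so the same user always gets the same experience while the percentage ramps up. A sketch (the user-id scheme is invented):

```python
# Progressive-rollout sketch: hash each user id into a bucket 0-99 and
# enable the feature only for buckets below the rollout percentage.
# Hashing is deterministic, so a user's experience is stable.
import hashlib

def in_rollout(user_id, percent):
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < percent

users = [f"user-{i}" for i in range(1000)]
enabled = sum(in_rollout(u, 10) for u in users)
print(enabled)  # roughly 10% of the 1000 users
```

Raising `percent` from 10 to 50 to 100 over successive days is the "measure scalability and performance as you go" loop the talk describes, and setting it back to 0 is an instant kill switch.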
That's pretty much it. Any questions, anything you would like to ask? Yes, please. So there are multiple things involved here. The first one is the functional test cases, where sometimes we go ahead and hit the APIs and check the responses. The second is test cases which make sure the back-end functionality is working fine. And the last is monitoring, where we set up Prometheus, Alertmanager, and Splunk, through which we get to know if something is going wrong. That kind of stuff comes into production testing. Yes? So it depends, feature to feature: sometimes, as you say, it could be A/B testing, or a feature flag, or progressive delivery. Yep, please. Yeah, we do have one example where we do real-time monitoring, and we generally check the data on an hourly basis; if something went wrong in the last hour, we get to know at the end of the hour and take action accordingly. Okay, if anyone has more questions, we can also discuss after the session. Thank you.

The best way possible, always, I love that. So my name is Tomaz Canabrava, I work for Codethink, and I am a free software advocate; I have worked on free software for 20 years. I joined KDE in 2004, really, really long ago; I am 40. Currently I'm working on Codevis, large-scale software analysis. I'm also a developer for Bloomberg, for KDE, and for Arch Linux, because apparently I have no better things to do than to code. I have no pointer, so I will keep coming back here. When the software architecture is wrong, there is no way for you to fix it in the future, or when there is, it is too costly: thousands and millions and billions of dollars spent trying to turn something horrible into something that will be maintainable, and the result will always be subpar. From the moment you start a company, if you do not pay attention to software design, your company will not
scale, and that's an important thing to pay attention to, because open source or closed source does not matter: if you started your software without taking care of the architecture, you will not get far, and it will cost a lot. Also, if anyone wants to interrupt me, please do; I have no problem being interrupted constantly, and I know that sometimes my English is really fast, so just poke me and I will repeat slower. So what the hell is it? It is a lot of things at the same time. It is based on the book Large-Scale C++, and it is a tool that applies to many more languages than C++. Currently we support a few, but we plan to have most of the languages being used in the world inside our toolkit; we already have partial support for Java and C#, not fully working, which is the reason we are not advertising them yet. And it already works: it has already helped fix issues in LLVM, in the Qt project, and in the KDE repositories. Each one of those has more than 2 million lines of code; LLVM has at least 5 million. And we were able to ingest all of that and fix issues. It's also being used by Bloomberg; Bloomberg is sponsoring this project, so I need to be honest with you that a large company is sponsoring my work, but it is also useful for them. This is Codevis. And this here is also Codevis: Codevis analyzing Codevis itself, because I need to use what I produce to make sure that the thing I'm doing helps people. What you see here is each compartmentalized library from different aspects of Codevis, and the position matters. Things at the bottom do not depend on anything; things directly above depend on the things at the bottom, and so on. So this thing here can depend on everything; this thing here depends on nothing.
So as soon as you look at this, you already know which is the most important thing, the foundation, and then you know how your libraries interact. But you see, the code that I just told you I wrote is also a mess: there is a lot of information here, and just by looking at it you have no clue what to do. It's the problem we were discussing: it is just too much data for a single human to analyze. But let's advance. So we have analyzers. We made sure we were ingesting large-scale code bases from day one. From day one we were ingesting LLVM, Qt, and the KDE libraries, which together are more than 10 million lines of code, so we always made sure our software was ready for production: not good, not perfect, but ready to be used on real software. The visualization framework and UI are decoupled from the analyzers' algorithms, so you can run the analyzers anywhere and then visualize the results. And there is a plugin system, because I might be a genius (I am not), and I might think that I know better than you everything you need, and I might still forget things. There may be something in your project, in your company, that you want to visualize in a different way that I did not take care of, so we have a plugin system to fix that. On the language level we support Clang, which ingests C and C++, and we also support Flang, the Fortran compiler of LLVM, so we ingest Fortran. But why am I, in 2024, looking at Fortran code? Isn't Fortran dead? No. Fortran is still used inside supercomputers, and no large-scale company that runs data centers with supercomputers will use my software if it does not handle Fortran. So Fortran is there too, and Fortran can interact with C and C++; as soon as you parse your projects, you will see those interactions.
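The bottom-up layout just described is classic levelization: a package with no dependencies sits at level 0, and every other package sits one level above its highest dependency. A sketch of that pass, with invented package names (this is a reconstruction of the idea, not Codevis code):

```python
def levelize(deps):
    """Assign each package a level: level 0 depends on nothing, and every
    package sits one level above its highest-level dependency."""
    levels = {}

    def level(pkg, seen=()):
        if pkg in seen:  # an upward edge: the design has a cycle
            raise ValueError(f"cycle through {pkg}")
        if pkg not in levels:
            ds = deps.get(pkg, [])
            levels[pkg] = 0 if not ds else 1 + max(
                level(d, seen + (pkg,)) for d in ds
            )
        return levels[pkg]

    for pkg in deps:
        level(pkg)
    return levels

# Invented example: ui sits above graph, which sits above core.
deps = {"ui": ["core", "graph"], "graph": ["core"], "core": []}
assert levelize(deps) == {"core": 0, "graph": 1, "ui": 2}
```

Drawing each package at its computed level reproduces the property the speaker points out: every dependency edge points downward, and anything pointing up is immediately suspicious.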
In everything we did on Codevis, we found errors in Qt and errors in LLVM, and every time we found an error we fixed it in the library and sent the contribution back. We are helping upstream, because this is extremely important: open source can die if there are only users and never contributors. So for you people here, seeing open source for the first time or using open source on a daily basis: help open source. Why might Codevis be useful? I have no idea; I am a researcher, and I am trying to understand why Codevis might be useful for companies. But we have found security issues just by analyzing graphical interactions between modules. The xz backdoor attempt on Linux this week could easily have been spotted by Codevis, because it was trying to use something that was not in the original file: if you downloaded it, tried to compile it, and analyzed the results with Codevis, you would see a cyclic dependency there that should not exist. So it could be useful for security audits. It could be useful for re-engineering automation: applying AI to engineering, changing the code, regenerating the graph, and seeing whether things should really be like that. Developing a monitoring process, which is something we already discussed, and visual templates for code and technical documents. What else? We have CI integration: three CLI applications that can run in your CI, or in a distributed system, and generate the information you need. Then you can store the results in a database, or feed them to an AI as part of a larger dataset, for instance. If you do not have a CI and just want to use the desktop application, that's also useful: you click Generate Project, which brings up this form here, you fill in the information, and it generates everything. It's fairly fast on your computer because it uses as many cores as you have. It can also run distributed.
If you use a local database you are not stuck on a server; or you can distribute your database, or use one on a server. And then we use that for visualization and data transformation. This is the easiest version of the previous graph. It is easier to understand than what I showed before, but it's actually the same thing, because here I did not explode, did not show, the entire catalog at once. Ignore the names I'm using for the libraries; I am obliged by contract to use those names. But it's quite easy to see here that I have a problem, because one of my lines is pointing up. One line pointing up means I have a connection from here to project, project to loader, loader to something (I have no idea what this is), and that is a cyclic dependency. This is hard to solve, hard to fix, and it is an architectural issue in my application that I added there on purpose, to verify that my tool was generating the correct graph. As soon as you find this kind of thing you can say: this is wrong, we need to stop what we are doing and fix this before the software gets too complex. But we have no idea where it is, because these are folders. So we expand the folders to see the files, and then you see which file is creating the dependency; then you expand the file and you see the exact methods or functions that are generating the dependency, and then you split them. That part is easy; the problem is finding those things. Here we have everything from every single library that your software depends on, independent of programming language. So if you compiled something in C that used C++ that used Fortran, every library will be here with every single thing the library has. You're not just analyzing your code, you're analyzing the entire dependency set. The main view is drag-and-droppable; it's dynamic.
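An upward-pointing edge in a levelized diagram signals a cycle, and finding one is a depth-first search that reports the first back edge it meets. A sketch with invented module names (not the tool's actual algorithm):

```python
def find_cycle(deps):
    """Return one dependency cycle as a list of nodes, or None if acyclic."""
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current path / finished
    color = {n: WHITE for n in deps}
    stack = []

    def dfs(n):
        color[n] = GRAY
        stack.append(n)
        for m in deps.get(n, []):
            if color.get(m, WHITE) == GRAY:       # back edge: cycle found
                return stack[stack.index(m):] + [m]
            if color.get(m, WHITE) == WHITE:
                cyc = dfs(m)
                if cyc:
                    return cyc
        stack.pop()
        color[n] = BLACK
        return None

    for n in deps:
        if color[n] == WHITE:
            cyc = dfs(n)
            if cyc:
                return cyc
    return None

# Invented names echoing the example in the talk.
deps = {"project": ["loader"], "loader": ["parser"], "parser": ["project"]}
assert find_cycle(deps) == ["project", "loader", "parser", "project"]
```

Running this only on the visually loaded elements, as the speaker mentions later, keeps the search cheap even when the full data set has millions of nodes.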
One thing you also said in your talk is that it's hard to have dynamic graphs, dynamic capabilities, because current tools work by generating one static graph, and then you need to look at it and infer all the information yourself. I hated that. I hated Graphviz because of that. Anyone here already worked with gvpr? Nope? You are lucky. gvpr is an aberration from Graphviz: a command-line tool for filtering graphs, a complete programming language for filtering graphs, which is undocumented, horrible to use, full of bugs, and was the only thing companies had until, I don't know, five years ago to filter large data sets. So now we have a dynamic view that can load and unload, with multiple tabs and split views. You can create, remove, analyze, and add documentation directly on the view to share with your colleagues, plus navigation, expanding, that kind of thing. We also have an information box: as you move your mouse, we add more information dynamically to that box, and this is also plugin-based. So you see that I'm not actually producing a lot of code; as soon as I have the data, I need to analyze it, and analyzing is one of the most complex processes there is. I can also search, because as soon as there are a million nodes here it's impossible to find anything just by looking, so I type what I need and quickly jump there. And the possibilities: we found an architectural issue, we found a cycle, I found something but I have no idea what. We delegate those to plugins, because my software will never implement everything your software might need. The plugin system can add new things to the UI and to the canvas, the drawing part of the graph. It can also look directly into the database, and it's done in Python or C++: Python is useful if you want to distribute plugins, and C++ is useful if you want to keep them internal to your company,
where you do not need to open the source, and where you need speed: Python is really good for prototyping, but as soon as you need speed, as soon as you are on a multi-million-node graph, Python will not keep up. And it's simple and direct. We ship with example plugins. We have a plugin for code coverage that analyzes your Git repository and displays on the view how much of a file or folder is covered by unit tests, so as soon as you drag and drop it on the view you see 80%, 90% directly. It also has cycle detection; actually, I have an example of cycle detection, so yeah. And we have Lakosian rules, based on the Large-Scale C++ book. I don't think anyone here cares about that, but some companies do, so we wrote it. This is the cycle detection plugin: you drop something on the view, right-click, Detect Cycles; it looks through your large-scale structure and highlights the cycles. Click on a cycle and you see which class, which file, has the cycle, so it's easy to go there and fix it manually. We do not solve the cycles for you yet, but we try to. And then you advance: you fix, regenerate your graph, and make sure the cyclic relationship is gone. More cycles, more cycles, cycles. I just said all of this, but here it is in writing, because my English is not perfect: the cycle analyzer only analyzes the elements visually loaded, not the entire data set, because the entire data set might take too long, and sometimes you know more or less where the cycle is but you are not sure. It finds the cycles; you click, you analyze, and you make your software more maintainable, so that people in the future will not try to kill you. And these, again, are plugins under study: right now we are studying an AI architecture analyzer.
It will load your code and try to guess whether your architecture is good or not, and maybe tell you where to focus. Your talk also had something related to that; I mean, our talks were really similar but focused on completely different things, which is impressive. Also code metrics, different styles of code metrics from different researchers. We have a knowledge-island analysis, which you also talked about; I think I have the same talk here. And automatic library splitting based on code levelization, which is nice. Take a look here at the first one. Here I have some libraries that are really small; those are okay. But I also have libraries that are really, really big. I could probably find in this one a cluster of elements that is not used by anything else, so maybe I can cut along a line and separate those things into different libraries, and just by visualizing, it's easy to spot where those are. We actually have this plugin working already, which is really cool; we implemented it last week, which is the reason it's not in the talk, but we have it. You select, we generate a text file that you look at, and then you move your files into a different library. I mean, we tell you what to do; we don't do it for you. And going back: AI-based hotspot visualization. You were also talking about that, about hotspots and AI. Basically, without running the code: what is most likely to block your program, and what has the biggest potential fix? We do not have that; it is just an idea, but a doable one, because we can use perf data, run the code, analyze it, and show exactly where your program gets stuck, or tries to, and how to fix it. And if you want more plugins, you can implement them yourself. I can help, because this is completely open source, completely free. I do not even want to know that your company is using it;
I want people to use it and tell me: Tomaz, you are a horrible developer, please fix your bugs, because this helps me. A few corporations are already using it, and they already have tools for this, but the tools suck. The tools are usually static analyzers for package managers: CMake with Ninja generating a graph of the build system, or software installation packagers like APT and Chocolatey that look at the graph of installed packages. But those tools only care about the software packages, not the source files; they care about the compilation step, and I don't really care about that, because compilation units have nothing to do with the maintainability of the software. I want a software that I have worked on for 20 years, like the one I have here, to still be useful in 20 years. I do not want people to throw away my code because it's unmaintainable. So why did I create this? Because I got paid to, and because it's easy to develop locally, refresh the UI, and see the changes: I do not need to wait for GitHub or Grafana or anything else to run on top of it, and with Grafana and GitHub there is a lot of setup involved. This is basically zero setup: you point the software at your source, you wait, and then you have the results. It's easy to integrate plugins, because no software will do everything we want, so we need a way for people to integrate the things they want, and it's easy to integrate visualizations for the same reason. Success stories: yes, Bloomberg is using this already. Yes, KDE, which is the biggest open source community, with more than 2,400 people, is also using this on some software. What else? For developers, this is open source and permissive: you don't need to pay, you don't need to tell me, I don't care. Merge requests I accept, and I will love you forever if you send merge requests. And plugins can be used by closed-source organizations;
they don't need to tell me, they don't need to push upstream; we are corporation-friendly. And do I still have time? I don't know how much time I have. Some. So, onboarding people: because the software works on visualization, you can create a visualization for your new software, for your plugin, for your architecture, and document it. And, because we also added this, you can generate source code from the visualization: it can ingest code and it can also spit out code. So it's easy to start with something small, let a new developer look at it, understand the connections, and then work on top of that, instead of dropping a million-line project on them with "deal with it, you need to deliver something in two weeks". That does not work. Architectural visualization is also good for dynamic documentation, which is really nice, because the documentation is important for the viewer, it is used when you generate code, and it can also be ingested when we read the code. And IPC from text editors, this is really nice. I implemented D-Bus support in Codevis. Do people here know what D-Bus is? No? It's a way for two applications to communicate with each other. So if I have Visual Studio Code or CLion with a tiny Codevis plugin, the editor can send a message to Codevis: load this file that I just opened. So I can navigate in my preferred editor, Vim or Emacs, and as soon as I navigate, Codevis shows exactly where I am, the file or the class I am in, which is also really cool and it helps. It's not something that will save your life; it's a helper, a tool for you to gain knowledge of a code base. A documentation plugin and more meaningful templates are also a bonus: you can generate source code for C++, Python, and Fortran based on Python templates, so if your company hates the templates I use, change them.
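Template-driven skeleton generation of the kind the speaker describes could look like the following, using Python's string.Template. The templates here are invented for illustration; Codevis's actual templates will differ.

```python
from string import Template

# Hypothetical templates -- the real ones are user-replaceable files.
CLASS_TMPL = Template(
    "class $name {\n"
    "public:\n"
    "$methods"
    "};\n"
)
METHOD_TMPL = Template("    void $method();  // body intentionally empty\n")

def skeleton(name, methods):
    """Emit a compilable C++ class skeleton: declarations, no bodies."""
    body = "".join(METHOD_TMPL.substitute(method=m) for m in methods)
    return CLASS_TMPL.substitute(name=name, methods=body)

print(skeleton("Loader", ["open", "parse"]))
```

Because the templates are plain text substitution, swapping in a company's own style is an edit to a template file, with no recompilation of the tool, which matches the point made above.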
You don't even need to recompile the software. And you select what you want to generate, because you might be showing too much to the user when he is only interested in a few things. If you generate all of the classes, all of the files inside those packages, they come out in a form that's already compilable into a library. It has no contents: it has the functions, it has the classes, but the functions have no bodies. Bodies aren't architectural, they are algorithmic, and because they are algorithmic, Codevis does not care yet; maybe we will care in the future. So, summarizing: you can create architectures and dump their skeletons; you can analyze pre-existing code architectures; it's a plugin-based system, so you can create something or reuse something someone else created; and it's good for studying the architecture of a software. That's my talk. I can show the software to those who are interested; it's running here. My name is Tomaz, I am a C++ developer, please save me from myself.

Yes, we do not have a plugin yet for domain-specific languages, but it's easy to add; we added support for Fortran in three days. Basically, we have a database schema that was designed to hold any kind of language that is either structured, trait-based, or object-oriented. So if you try to parse Haskell, it might be a little bit of a problem, but for anything like Lua, JavaScript, or PHP, I would say that if you have a library that knows how to parse the language and gives you the tokens, it's a one- or two-day job to implement. We parsed the kernel, yeah. And it worked on the first try, which is strange, really strange, because we wanted something in C to showcase the visualization of free functions. When we are dealing with C++ we have classes, so we do not show functions; it's easy to bundle 20 or 50 things inside just one node.
But as soon as you have functions, as soon as one function calls another, as soon as you have callbacks, this changes. So we used the kernel, and we used SQLite, which is also in C. We had a crash with SQLite; we fixed the crash and we also sent them a patch, but they don't accept patches, they rewrite everything you send. But yeah, it works with the kernel.

Yeah, it will not handle that. What happens is: as soon as you use kconfig to generate your kernel configuration, you use bear to go through the Makefile and generate a compile_commands.json, and then we ingest that compile_commands.json. So if you have multiple different configurations, you need to create multiple different views. It's impossible to do what you requested, because we are using Clang, and we are not using Clang as just a parser: we use Clang to expand all of the macros at analysis time and also to handle all of the template expansion from C++. So we actually know every single template and every single possible expansion your software did, but as soon as you hit an #if 0 block, we lose that block. Any more questions? Nope. Thank you for your questions; I like to talk about the kernel. And this is it.

Different kinds of architecture, bare metal, different kinds of operating systems: we built our CI/CD service to satisfy our requirements, and that's the Euler pipeline. Compass CI is the base, hello, of the Euler pipeline, so I will start with Compass CI first. It is the scheduling and execution system, which is the most important part of any kind of complex system for building or testing; you often need infrastructure to schedule resources and execute your jobs on remote nodes. So let's start with Compass CI.
Compass CI is declarative. Every job you submit is declarative: you just submit a job YAML announcing the parameters you need, the operating system, the kernel, the testbox node. It supports different kinds of kernels, machines, and software, to serve your requirements for building, testing, benchmarking, or simply borrowing a machine to use. That's what I'm talking about: if you need bare metal, a VM, or a container, all of them are supported. Compass CI is a microservice architecture with a maximum of 16 containers. The picture shows three layers; the three layers are all of Compass CI, and the red one is something else, which I will introduce later. The bottom one, the local run-job layer, is some setup scripts and monitor scripts, like the meminfo monitor, the cpuinfo monitor, the executor, the statistics script, and the run-job script, which runs your test or build program. The local run-job layer runs in each testbox you borrow. The middle layer is the remote run-job layer: the services running on the remote manager node, like the lifecycle services, the main scheduler, the queues, the database, and other services. And the red one, the project-specific pipeline layer, serves different kinds of applications. For example, if you need a build system, that is EulerMaker: we built EulerMaker, our OS build system, based on Compass CI, as an extra service on top of it. The Euler pipeline, which I will introduce later, is also a red-layer application. I will not introduce EulerMaker today, because that is not my session; it will be introduced tomorrow at 2:30 p.m. in room 202. If you have any interest in it, you can join that session tomorrow. So, back to my session, I will continue with Compass CI.
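A declarative job submission like the one just described reduces to a small set of required fields that the scheduler can validate before queueing. The field names below are illustrative, not the system's real schema:

```python
# Hypothetical shape of a declarative job spec -- field names invented.
JOB = {
    "suite": "build-kernel",
    "os": "debian-12",
    "arch": "x86_64",
    "testbox": "vm-2p8g",   # bare metal, VM, or container flavor
    "runtime": 1800,        # seconds
}

REQUIRED = {"suite", "os", "arch", "testbox"}

def validate_job(job):
    """Reject a job spec that is missing required declarative fields."""
    missing = REQUIRED - job.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return job

assert validate_job(JOB)["suite"] == "build-kernel"
```

The value of the declarative style is exactly this: the user states *what* to run and on *which* kind of box, and the scheduler decides *where* and *when*.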
Once the user submits the job YAML, a testbox grabs it from the scheduler queues. Once it grabs it, it uses one of four ways to prepare the environment needed for the job, instead of installing, because installing an operating system on the machine the original way, from an ISO or a qemu image, takes a long time. The initramfs way is very fast: to be more specific, it means we pack all the components into one initramfs file, and the system and the software the job needs run entirely in memory, so deployment is very fast. The NFS and CIFS ways mean we prepare a root filesystem for a specific OS distro; after we deploy the OS by mounting that rootfs, which is read-only, we then mount a writable overlay layer over NFS or CIFS, so your job runs on the overlayfs. That is also very fast. But these top three ways share a shortcoming: they cannot persistently store data. If you need to persist your data, you use the local disk way, which means the rootfs is mounted on a logical volume. Those are the four ways we support, so we don't do an install; that's how Compass CI deploys.

Errors are inevitable. Every time we submit jobs, different kinds of errors can occur, like the statistics in the picture on the left show. For each error, Compass CI submits a git bisect run to find the first bad commit: it repeatedly submits the same job with different commit hashes until it finds the first bad commit for each of the errors. That makes it easy to find the root cause of an error.
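The automated bisect described above, assuming history passes before the first bad commit and fails from it onward, only needs a binary search over the commit range; `is_bad` below stands in for submitting the job at a given commit and checking the result.

```python
def first_bad_commit(commits, is_bad):
    """Binary-search an ordered commit list for the first failing commit.

    Assumes every commit before the first bad one passes and every commit
    from it onward fails, so O(log n) job runs suffice instead of n.
    """
    lo, hi = 0, len(commits) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(commits[mid]):
            hi = mid          # first bad commit is at mid or earlier
        else:
            lo = mid + 1      # still passing: look later
    return commits[lo]

commits = ["c1", "c2", "c3", "c4", "c5"]
# Pretend the regression landed in c4.
assert first_bad_commit(commits, lambda c: c >= "c4") == "c4"
```

This is the same contract as `git bisect`; the CI's contribution is re-running the identical job YAML at each probed commit automatically.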
And for performance testing with Compass CI, you can make a good comparison between two commits, two OSes, or two pieces of hardware, to find out what changed in performance, like the picture on the right shows. That's just the basic function of Compass CI.

And now we reach the Euler pipeline, the CI/CD service. The basic function of Compass CI is scheduling and executing jobs, but it cannot chain job after job and run them automatically. The Euler pipeline solves this problem: a declarative grammar to arrange all the stuff and drive it by events. The architecture of the Euler pipeline is one plus one plus n plus m: one pipeline engine; one declarative grammar, like the right picture shows, with several keywords; n kinds of predefined templates, serving different scenarios, predefined by the maintainers; and all the templates arranging m kinds of atomic jobs, such as builds, tests, and different kinds of scripts. Finally, the Euler pipeline serves kernel CI, package CI, distro CI, and so on. The declarative grammar looks like this; it is very similar to the GitHub Actions one. If you need to define a webhook trigger, you just use the on keyword and announce what kind of event you need, the git repository address, the branch, and so on. Or you can use the on keyword to announce a crontab trigger, which is what we use most: the interval at which to trigger the whole workflow. For the control-flow structure of the pipeline, you just use sequence and parallel sections, like the bottom picture shows. Each row is the declaration of a job, and the parallel and sequence announcements represent the structure of the control flow. There is no limit on how many jobs run in parallel or how complex the structure is.
Declarative grammars used in GitHub Actions and others basically limit the structure's complexity; this grammar makes it unlimited. A single rectangle represents one atomic job, like this EulerMaker create-project job, and the job YAML is the YAML I mentioned before in the Compass CI section, so it is also declarative. The grammar also supports reference expressions embedded in your YAML, like this: you can reference a value from the running context, so vars.os means that before this job is submitted, it takes the OS value from the vars namespace of its context. Or you can use this grammar to announce a Python expression for some string handling or some numeric work. And there is the matrix keyword, used to announce different kinds of combinations: if you submit a job but need it to run on different distros and different architectures, you can use this keyword to simply split your single workflow into many combinations, and you can use the matrix overview in the dashboard to see at a glance the status of every branch. When your pipeline hits an error, when a job fails, you can click a button to reproduce the same environment easily: once you click the button, we provide a web terminal, which you can share with your partner to debug in the same web terminal, or you can use your own terminal application to enter the environment for debugging. When you finish debugging and make your pull request, updating your git repo, the webhook event triggers a new running record and a new round of testing. The target for the Euler pipeline is to support all of these: code CI, meaning the access control on your repo; kernel CI, the access control for the kernel repo; or ISO CI, building your own distro image.
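The matrix keyword's fan-out described above turns one job declaration into one concrete job per combination of values. A rough sketch of that expansion, with illustrative field names:

```python
from itertools import product

def expand_matrix(job, matrix):
    """Expand one job template over every combination in a matrix block,
    the way a pipeline's matrix keyword splits one workflow into many."""
    keys = sorted(matrix)
    return [
        {**job, **dict(zip(keys, values))}
        for values in product(*(matrix[k] for k in keys))
    ]

# One declared job becomes 2 distros x 2 architectures = 4 concrete jobs.
jobs = expand_matrix(
    {"job": "build"},
    {"os": ["debian-12", "centos-9"], "arch": ["x86_64", "aarch64"]},
)
assert len(jobs) == 4
assert {"job": "build", "os": "debian-12", "arch": "aarch64"} in jobs
```

A dashboard overview is then just this list grouped by the matrix axes, one status cell per expanded combination.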
The release CI, for example, is for release management for your community. Each of the CI titles is for a different kind of thing, and all of them are different kinds of templates. And in code CI, the antivirus check or the SCA license check are all atomic jobs supported by Compass CI. So it means we can build a large ecosystem of different kinds of atomic jobs and use templates to arrange them easily. All you need to do to build your pipeline is select a predefined template and do some fine-tuning: set some parameters or change a few small details, and your pipeline will work. That's all. And if you have any interest in the CI/CD system in the openEuler community, you can reach us through these channels. Any questions? Thank you.

I did not mean that other CI systems are atomic; the word atomic I used before is just a name, because the job is the minimum unit of our system, so we call it an atomic job. Two CI systems? No, for your requirements they could be totally different, but they are not different systems; they are all in the Euler pipeline system, just different kinds of YAML. You can think of it like this: all of them are templates in the grammar I mentioned before. Yes, you can just make a new template for your requirement. Are there any more questions? Thank you all for attending.

I know these are the last sessions and everyone is a bit tired, but I will try to make this not just full of information but really lively, actually showing things. Okay, so let's start. I'm a software engineer at CyberAgent and I'm a maintainer of the PipeCD project. My talk is about PipeCD, but before we get started, I'd like to share a little bit about the challenges of deployment on multi-cloud environments.
What I'm listing here are some problems we face when doing CI/CD company-wide; I'm from the CI/CD platform team. The first one is: different cloud, different tools. By cloud I mean, for instance, Kubernetes, AWS, GCP. For AWS, maybe it's Lambda, ECS, or anything else; for GCP we have Cloud Run and things like that. If you have experience working in that kind of mixed environment, or maybe it's not just your team but your whole organization, where different teams choose different platforms, then you have definitely faced this problem. And the point is: for Kubernetes we have community standards like Argo CD or Flux CD, but for the others, what do we have? We don't have any standard like that.

The next one is about security. I know that most people, I mean most companies and most teams, use their CI for CD as well, which means they have to share credentials with the CI. What we actually believe is that we should separate the CI from the CD. The CI should focus only on testing and building the artifacts, and the CD should be the one that takes the rest, from the artifacts to the cluster deployment. It can lead to some serious security issues if we mess up, I mean, if we actually use one system for everything; that's why I listed it as the second point.

The last is: how do we do progressive delivery to multi-cloud? Here I have a new word, I mean, not actually new, but pretty new if you don't work as a CI/CD engineer, which is progressive delivery. We know about GitOps, we know about continuous delivery, which is CD, but PD, progressive delivery, is a term for using certain strategies to deploy an application. If you have heard about canary releases, or about blue-green releases, that is progressive delivery: you don't fully deploy your application at once.
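The CI/CD separation argued for above can be sketched as a CI workflow that stops at the artifact. Assuming GitHub Actions as the CI (the registry name and make targets below are hypothetical), the workflow only tests, builds, and pushes an image, and holds no deploy credentials; a separate CD system watching the repo or registry takes over from there.

```yaml
# .github/workflows/ci.yaml — sketch: CI stops at the artifact, no cluster credentials.
name: ci
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        run: make test
      - name: Build and push the image
        run: |
          docker build -t registry.example.com/my-service:${GITHUB_SHA} .
          docker push registry.example.com/my-service:${GITHUB_SHA}
      # Note: no kubeconfig and no cloud deploy credentials appear in CI.
      # The CD system detects the new artifact (or a Git change) and deploys it.
```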
You have a canary version to test with your users, and if everything is okay, then you go on and release everything. Or if you hit any problem, you roll back automatically. That's a simple mental picture of progressive delivery.

So what do we have as a solution for all of this? I'm not sure you have the same image in mind as me, but the first thing that comes to my mind, if I have to support multiple application kinds on multiple clouds, is Jenkins. I know it's still the standard in the community and in the market, because I actually checked: I attended some conferences in China and in America as well, and I asked this question: what do you use as the CD for your applications and your systems? The top answer I got is Jenkins, and the other frequent one is actually "the CI does the CD work". So basically half of the answers are the CI-does-CD thing, and the other half is "oh, we have everything on Kubernetes", which is pretty nice: they run everything on Kubernetes and use Argo CD for that. And just a small note here: Argo CD is strong compared to Flux CD.

So what is the problem? If the CI does the CD job, it can lead to serious security issues, as I said. For Jenkins, you may have heard a lot from the community about the performance issues. Basically, Jenkins predates the time when cloud native grew this much, so that's kind of understandable. And on the other side, we only have standard CD for Kubernetes, so for the other kinds of applications, like I said before, serverless things like Lambda or Cloud Run, or Terraform, you have to have tons of scripts running on Jenkins or on some CI like GitHub Actions, and it's costly to maintain that many scripts across different clouds and different application kinds. So that's the gap in the market, and we think that, we feel that.
I mean, not just feel: we have experience maintaining that kind of complex CI/CD for a whole company. I will share a bit more about that later. So we started the PipeCD project as a team project, and the vision of PipeCD is "one CD for all". What we want to emphasize here is that we are CD; we focus only on the CD. For CI, hopefully you can just use something like Compass-CI's openEuler pipeline from the previous talk. But for the CD, we want to make PipeCD the one CD for all. It's designed to overcome the challenges from our experience of deploying various applications to various kinds of platforms on different clouds, and it has to work easily and smoothly across a scaling organization.

A little information about PipeCD: it's a GitOps-style CD tool, it takes multiple application kinds and multiple providers as first-class citizens, and it fits large-scale organizations but works well with a single project as well. It's currently a CNCF sandbox project; if you don't know the CNCF, it's a sub-foundation of the Linux Foundation. And it's fully open source. Currently we have passed 3,700 commits already, we have nearly 80 contributors worldwide, and we're on the way to our first 1,000 stars, so if you like it, please support us. One fun thing: there is a bot in the contributor list. We also have some former PTIT students as contributors; they know me and I invited them to become contributors lately.

Next is PipeCD at a glance. I will guide you through some features of PipeCD and several perspectives of the project. First, as I said before, PipeCD supports a range of application kinds: we currently support Kubernetes, AWS ECS and Lambda, GCP Cloud Run, and Terraform. And this is, what can I say, a sample of what you actually need to use PipeCD in action.
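A sketch of such a configuration, following PipeCD's documented `apiVersion: pipecd.dev/v1beta1` style: the application name and the threshold values are made up, and the exact stage and field names may differ between PipeCD versions, so treat this as an illustration rather than a copy-paste recipe.

```yaml
# app.pipecd.yaml — sketch of a Kubernetes app with a canary + analysis pipeline.
apiVersion: pipecd.dev/v1beta1
kind: KubernetesApp
spec:
  name: my-service
  pipeline:
    stages:
      # Roll out a canary with a fraction of the desired replicas.
      - name: K8S_CANARY_ROLLOUT
        with:
          replicas: 10%
      # Query a metrics provider against the canary for a while;
      # if the expected condition fails, the deployment rolls back.
      - name: ANALYSIS
        with:
          duration: 10m
          metrics:
            - provider: prometheus     # hypothetical provider name
              query: rate(http_requests_errors_total[5m])
              expected:
                max: 0.01
      # Promote: sync the primary workload to the new version.
      - name: K8S_PRIMARY_ROLLOUT
      # Clean up the canary resources.
      - name: K8S_CANARY_CLEAN
```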
What we have here is a Kubernetes application manifest: we have a Deployment, we have a Service, the smallest setup, and we need to prepare one more PipeCD configuration file. It's something like this. Basically it defines the pipeline of what is actually going to happen, and the pipeline is a set of stages that we define, customizable per application kind, with some common stages as well. You can actually mix them per application kind to build up your pipeline, and this is what you get with this kind of pipeline. The nice point is that it works not just for Kubernetes but for the other application platforms as well. This is a sample for Cloud Run; you see the same thing: we have the service YAML, which is the setup manifest for the GCP Cloud Run app, and we have the pipeline definition YAML, and we get pretty much the same result.

The next perspective: PipeCD integrates with the Kubernetes ecosystem easily. It works well with Kustomize, Helm, and Istio, if you know any of them.

The next thing I'm showing you here is the simple UI for the deployment details in PipeCD. What we have here is the pipeline I just showed you, composed of four different stages, and we have stage logs when clicking around here. This one is the application detail page, and I want to talk a little about it. This is the drift detection feature of PipeCD: it compares the manifests with the in-cluster state, and if it finds any difference, it is shown here, and you can trigger a sync of your application from here. Basically, PipeCD has four kinds of triggers: on git change, on drift detection, on manual click, and on chain. I will talk about the on-chain trigger later.

Next is the overview of the PipeCD architecture. This is the reason why, as I said, PipeCD works well for large-scale organizations as well as for simple projects.
What we see here is that PipeCD is composed of two components: one centralized control plane and many pipeline agents, which we call piped. So for a simple platform team, I mean for the whole company, you only need one centralized control plane, and basically the platform team operates this control plane; it is the stateful component. The piped agent is the stateless component: every product team or developer team can install it inside their cluster, and it's just a single binary that runs anywhere. We have tested it running as a Kubernetes pod, in a VM as a process, or even as a serverless handler; anything just works.

And when we talk about scale, we have one centralized control plane that the platform team actually needs to care about while the company is scaling. We have a new team, a new project, a new cluster? Just install the piped agent into that cluster and everything keeps working well, at least in our case. And if you are on a platform engineering team that maintains the CI/CD for the whole company, your job is to install the control plane and operate it. I can say that's much less work compared to my previous experience, when I had to maintain the whole Jenkins thing, and also compared to Argo CD: if you use Kubernetes and Argo CD, when you update your Kubernetes cluster you have to ensure Argo CD still works on it, and vice versa, when you update Argo CD you have to ensure your current Kubernetes cluster still works, something like that. For PipeCD, the platform team has to maintain the control plane only, and it's basically a Kubernetes application, but it's well containerized, so you can actually install each component separately; still, it's recommended to just use the helm install command and everything comes up.

It's even easier when you look closely: we have five components inside the control plane, but these two are cloud-provided services, which is to say this one can be just RDS, or Cloud SQL from GCP, or Aurora from AWS, whatever else; I mean, it supports
currently MySQL, and Firestore from GCP; and the filestore is just S3, or GCS on GCP. I mean, the cloud provides many things. Also, in the PipeCD architecture we made an effort so that, by running the agent in-cluster, your agent only needs credentials to actually touch the applications in its own cluster, plus a way to connect to Git to periodically pull the application manifests. The control plane does not know anything about your cluster; your credentials are kept in your cluster, and that's the point: they don't leak anywhere. Also, every request goes from the agent to the control plane in a pull model, outbound requests only, which means the platform team maintaining the control plane does not need to know what your agents are or where they are installed; the control plane is simply the place for storing the application state, the deployment logs, and so on. So it's way safer.

PipeCD has some security features as well. We have built-in secret management; basically this is a templating feature that lets you store things like passwords or credentials, encrypted, alongside your manifests. And we have RBAC as well, which I think is fair enough for production-ready setups, and also SSO for login.

Okay, so the next one is progressive delivery. As I said before, progressive delivery is about how to automatically deploy the new version to a subset of your users and check for any problems, any correctness or performance issues, before rolling it out to all users, to reduce the impact. If you actually run any kind of B2B or B2C service, you understand what I mean; especially B2B customers really, really care about a bug-free environment. So it is something we should care about if we actually care about our users. PipeCD has a stage for this, named the ANALYSIS stage. Basically, this stage performs some kind of queries against the canary workload, actually testing whatever you have written down, and if anything wrong happens, it automatically triggers the rollback. And it's not only for
Kubernetes but for the other platforms as well. Oh sorry, I missed a slide. So internally, the ANALYSIS stage basically has the piped agent connect to some kind of metrics service, like Prometheus or Datadog, which PipeCD currently supports, and it performs the queries; we have thresholds and so on, and if it finds something wrong, then it triggers the rollback.

The next one is the deployment chain feature of PipeCD. What we have here is: one application finishes its deployment, then it triggers the next application, which triggers the next application's deployment. A simple use case I have here is where you have Terraform as the IaC for your cluster; after that is updated, you trigger the deployment of the Kubernetes applications or cloud applications at the same time, and then you trigger the update of the next application; everything runs in a chain. Another use case: we have a development environment application that we have to test first; if everything is fine, then we deploy the staging environment, then go to the production environment. Or, as in our company's use case, we have multi-region applications as well, so we can release to Asia first, then the US, then Europe, minimizing the impact on end users by grouping them by location.

The next one is the four keys feature of PipeCD. If you know about DORA, I mean the four key metrics that give some insight into your development process, you will understand what I mean here. We support two of the metrics for insight into the application deployments: we use deployment frequency and change failure rate.

Okay, so there are a lot more features of PipeCD that you can test by yourself. We have a live demo here, so you can access it and click around; you just need to log in using a GitHub account, then you can click around and view things. One of the features I want to mention right here is that we are a CD tool, we focus only on CD, but we can integrate
easily with any CI. You can just use your CI, and as the last step of the CI, you trigger the deployment by sending an event to the control plane; then PipeCD can perform the rest of the pipeline, the CD part. Okay, so a little about PipeCD at my company: currently one centralized PipeCD control plane handles about 3,500 applications