Thanks Santiago. Thanks everyone, welcome. My name, as he mentioned, is Andrew Block. I'm a senior principal consultant with Red Hat. I travel to customers throughout the globe, from small customers who are just looking to get into containerization to large enterprise customers. I'm not going to lie: ExxonMobil, who I'm going to be bringing up in a moment, is one of the larger customers. There are typically two user bases that I work with across all these customers. One is the application development side: they're looking to take advantage of containerization, application portability, being able to scale as they need to. The other is the infrastructure side: they want to manage the platform, they want to get away from managing old virtual machines, they want to start providing a shared environment to their customers. Now, what makes the ExxonMobil story very unique to me, and I'm sure to all of you as you'll soon hear, is that they are an infrastructure team, but they're consumers of the platform. They're not application focused at heart; they're infrastructure. And some of the use cases they have developed have not only transformed their team, but have transformed how the organization manages its own infrastructure. So I'm excited to introduce to you the ExxonMobil team. Guys, come on up here.

I'm Lucas Paglia, I'm an electronics engineer and I work on infrastructure automation. Before starting with the topic we came to present, I wanted to tell you a little about the company. Although it's not very open to the public, and many of you may not know it, ExxonMobil is the number one private-sector oil company, and it's also in the petrochemical and energy industries. Its beginnings date back about 135 years: it started as a kerosene retailer in a small region of the United States, and it's currently the leading company in the sector, recognized in almost every respect by the rest of its competitors. We have a presence all over the world, and not just offices: refineries, franchises and service stations, and platforms in the ocean. They're sites of different types, and what they all have in common is that they require connectivity services, which are fundamental, as well as access to applications. In total we're more than 71,000 employees around the world; in Buenos Aires in particular, we're around 1,000. The points marked here are the service centers, the main offices from which we provide services to the rest of the sites in the world, which need us in order to operate. Another particularity: ExxonMobil is perhaps better known in our country for its brands, like the service stations or Mobil, the lubricant brand.

Let's continue. To support this infrastructure, we have around 30,000 devices, mainly network devices and servers. The problem is that, being so big and constantly changing, the scalability and maintenance of all our corporate networks and services is very difficult. This is what forced us to look for an automation solution, because the problem was already getting out of our hands: this is a huge number of devices, and with how many sites we have, everything was becoming complicated to handle. Many of the operations are still done manually, so the focus over the last three years, as I'm going to show you now, has been to automate as much as possible.
As they were saying before, we are not an application team; our focus and our core business is infrastructure. I'm particularly involved in the network area, Mauro as well, and Lucas is more on the server hosting side. This is a bit of the evolution of ExxonMobil, particularly of the infrastructure area, toward a more DevOps culture, to give a little context. Initially, and by this I mean about three years ago, we were doing simple operations the legacy way, as you can imagine: log into the device and configure it, all by hand. There were perhaps good practices or things prepared in advance, and a few applications or scripts, but they were isolated, owned by individual teams, and complicated to share. And if there was a small application, it ran on virtual machines, which were not within our reach but belonged to another area. It should be noted that there was no developer knowledge within the teams; they were all engineers specialized in their particular area of infrastructure: security, networking and so on.

The first change came in 2017, with the introduction of Docker. When containers first arrived at our company, they simplified the process we had to go through to get access to hosting, as in the case of VMs, and gave us a lot more flexibility to start creating. At that time there were people who had no developer knowledge but were curious, so they started building things; that is, this emerged bottom-up among the employees. We started experimenting with this to try to simplify the operations that, we realized, were always the same and could even be shared. We also began to adopt a lot of the tools that are completely standard for developers, like source control management, Git in our case; all things that were not at all common for us. Although there was momentum at this time, it was still hard to deliver a lot of value, because it was difficult to translate into a script the complete configuration, or everything that we wanted to automate.

In 2018 there was another big leap, which was the introduction of OpenShift into our environment. I did not mention it, but we did not have much knowledge of containers at that time, and what OpenShift solved for us was a lot of the work of maintaining the container lifecycle. It was very difficult to explain, for example, to a network engineer that he had to maintain containers; it was something totally new for them, on top of having to develop applications for it. So OpenShift made it very easy for us to get a platform on which to host our applications. That year we made great progress, and we included CI/CD pipelines. There was still, unfortunately, a lot of repeated effort across different areas. It was complicated to collaborate, because each team took different approaches, and also because of how we are distributed around the world, so there was little reuse. The good thing about this stage was the creation of many applications. By this time it was more common in the infrastructure teams to see people with a developer background, so many applications began to appear, many compared to what we had before.
But the connecting piece was still missing: we had automated, perhaps in a more formal and professional way, certain operational tasks, but the whole process we actually carried out end to end was not connected, so we continued to see manual tasks. And this changed a lot when we introduced Ansible as our automation framework, among others, across all the teams. We had Ansible as the center of orchestration of all our processes and all our tasks, and this brings us to the present. That was when we began to consolidate teams that were closer to a DevOps culture. Although they are not 100% DevOps, there are now infrastructure people who understand what they have to do, or at least understand how to consume these services, how to translate their infrastructure definition into code through Ansible and execute it. To give you an idea of the work we do: for example, I want to provision a router, so I write the definition of the whole device, its configuration and more; Mauro is going to cover this in more detail.

Now what the guys are going to present, Mauro from the network side and Lucas from the server side, are two real cases here in Buenos Aires. Mauro's case is closer to the final stage, where several teams are integrated to perform a common task and give a service to a real client, not just a single team's function. Lucas's case is how a more internal group exposes its services to its clients; it is perhaps at an earlier stage, without orchestration across many other teams to perform a broader task.

Well, one of the problems, or challenges, that we found when we wanted to start automating was a bit of what Ezequiel said: the difference in background between traditional infrastructure people and developers, or people who wanted to get more involved with code. This was a very big barrier, and what we decided to do to ease this progression, and also to consolidate all the services we wanted to offer, was to develop a framework that easily allows you to start automating simple tasks, and later more complicated ones. So we started developing this framework, and we had to decide with which application, or which part of this whole process of configuring network devices and maintaining the network, we were going to start.

At that time, the standard configurations for network devices were saved in Excel, like at many companies perhaps, and it was very difficult to consume those configurations. To configure a device, a network analyst had to find the Excel file, open it, fill in certain variables depending on the device, copy all of that and paste it into the device. This was very cumbersome, could cause many problems, and was not optimal. That is why we developed what would become the first application, which we call Standard Configurations. It is basically an API that, when you provide certain information such as the hostname of the device, the operating system, the version, etc., returns the standard configuration for that device. This also enabled collaboration with the network infrastructure teams, since they were the ones in charge of writing the standard configurations; what we developed was mostly the application itself that handled this interaction.
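To make the idea concrete, here is a minimal sketch of what such a service could look like, assuming a small Flask app that renders Jinja2 templates kept in a Git-versioned directory. The route, parameter names and template layout are illustrative assumptions, not ExxonMobil's actual implementation.

```python
# Hypothetical sketch of a "Standard Configurations" API: given hostname,
# OS and version, render the matching Git-versioned template.
from flask import Flask, request, jsonify
from jinja2 import Environment, FileSystemLoader

app = Flask(__name__)
# Templates live in a locally checked-out Git repository,
# e.g. standard-configs/ios-15.2.j2 (illustrative layout).
templates = Environment(loader=FileSystemLoader("standard-configs"))

@app.route("/api/standard-config")
def standard_config():
    hostname = request.args["hostname"]
    os_name = request.args["os"]
    version = request.args["version"]
    template = templates.get_template(f"{os_name}-{version}.j2")
    return jsonify({
        "hostname": hostname,
        "config": template.render(hostname=hostname),
    })

if __name__ == "__main__":
    app.run()
```

Because every change to a template is a Git commit, this is also where the versioning advantages come from.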
With this we gained many advantages. First, we started versioning the configurations in Git, which made it very easy to determine what changed and who changed it, much simpler than in an Excel file. Then we had what we call a Single Source of Truth for standard configurations, and we also reduced time and errors. And we enabled a very important factor: integration with other applications. This application could be consumed by other applications, like Ansible, or by a user, since it also had a frontend.

After this, what we wanted was to finally start pushing these configurations to the network devices automatically. We realized that this was going to cause certain changes in the network, and there could be errors, there could be problems. So we decided to continue with another application, one that verifies that after a change the device is working correctly. This is why we built our next tool, which we call Change Testing, based on Jenkins. What it basically does is take a snapshot, a photo of the device, before making a change; then the change is made automatically, and another snapshot of the state is taken afterwards. With this, a set of tests is run to verify whether the state of the device is correct, whether it is the desired state or not. If it is not, the configuration can be reverted quickly. This was the first tool where we saw real collaboration between the infrastructure teams and the automation teams with more developer background: we developed the tool itself, but all the tests were written by network experts, since they knew what had to be tested, and it allows them to extend these tests whenever they want without going through the automation team. So it was a very important tool.
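As a minimal sketch of that flow, the logic could look like the following, assuming hypothetical snapshot, apply and test callables that stand in for the actual Jenkins jobs and network tooling described here.

```python
# Sketch of the Change Testing flow: snapshot, change, snapshot,
# test, and revert on failure. The three callables are assumptions
# standing in for the real Jenkins jobs and network tooling.
from typing import Callable

def tested_change(
    snapshot: Callable[[], dict],         # e.g. running config, routes, interface state
    apply_config: Callable[[str], None],  # pushes configuration to the device
    tests: Callable[[dict, dict], bool],  # checks written by network experts
    new_config: str,
) -> bool:
    before = snapshot()                   # photo of the device before the change
    apply_config(new_config)              # the change is made automatically
    after = snapshot()                    # photo of the state afterwards
    if tests(before, after):
        return True                       # device is in the desired state
    apply_config(before["config"])        # otherwise revert quickly
    return False
```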
At the same time, we were building the tooling to automate these configuration deployments. After some research, we decided to go with Ansible, since it has a lot of network modules and is widely used in the industry for the network area. We started to create certain playbooks in Ansible to check the configuration of the devices, to remediate the devices, etc. We realized that to scale, we needed to run these playbooks on a platform, and we decided to start with AWX. As a quick introduction for those of you who don't know it, AWX is a platform for running Ansible playbooks that gives you credential management, schedules, dashboards, a lot of things that are super interesting and useful when you need to scale and have reporting or an inventory. It's an open source project sponsored by Red Hat, and the enterprise version is called Ansible Tower. This became one of the platforms where the engineers started to put their playbooks, and to this day they keep developing more on their own; any infrastructure team can start developing in Ansible and run it on that platform.

Finally, to complete the framework, as we call it, we have the sources of truth, or databases; in general these are several databases that provide information to the other applications, like the inventory, and where we also save reports, etc. That's how we built this framework, which has advanced a lot since it started; we're now at a stage where a lot of network analysts collaborate and add new features to the framework that the automation team developed. It was a great change.

All of this is deployed on OpenShift, which allowed us to scale much more easily. We went through the stages that Ezequiel described: Standard Configurations first ran in a Docker container on a virtual machine, then we moved it to OpenShift, which lets us scale it as needed. The same goes for AWX, and the Change Testing tool is all on OpenShift too, which, together with the pipelines, also makes deploying new versions easier. It was a big change.

And to finish, I wanted to share very briefly two use cases that we built on top of the automation framework. One is the automatic remediation of network devices. Basically, every so often it scans all network devices, verifies that the configuration each one currently has matches the standard it fetches from the other tool, and if it is different, it pushes the standard configuration to the device, to keep all network devices reliable, all automatically. The other is that, in the case of a cyberattack on the network, we have the ability to isolate the network by means of Ansible to protect ourselves.
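To make the remediation use case concrete, here is a minimal sketch of that loop, assuming the Standard Configurations API sketched earlier; the URL and the inventory, running_config and push_config helpers are hypothetical stand-ins for the real Ansible playbooks.

```python
# Hypothetical sketch of the automatic remediation loop: compare each
# device's running configuration against the standard served by the
# Standard Configurations API, and push the standard if they differ.
import requests

API = "http://standard-configs.example.com/api/standard-config"  # illustrative URL

def remediate_all(inventory, running_config, push_config):
    for device in inventory():                       # scan all network devices
        standard = requests.get(API, params={
            "hostname": device["hostname"],
            "os": device["os"],
            "version": device["version"],
        }).json()["config"]
        if running_config(device) != standard:       # drift detected
            push_config(device, standard)            # restore the standard config
```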
So that would be all. Over to you, Lucas. Thank you.

Well, on the server side: many of the 30,000 devices that Ezequiel mentioned are obviously servers, and the usual requests we get are server provisioning and decommissioning, and access control. What we realized was that the way our clients submitted requests was by directly contacting members of our team, or through a generic ticketing system where the user, in this case our client, had to type what he needed in a non-standard way. The result was that the information was usually incomplete, because something was forgotten or entered in the wrong order, and the format was not standard either, which did not allow us to automate the handling of the request. We had to go through every request by hand, see what information was there and what was missing, and talk to the client, and all that back and forth ended up generating many delays in the execution of typical operations that could be much simpler today.

Our approach to solving this problem was to build a self-service web portal, that is, an interface between the user and us where the user could enter all the information needed, and it was always complete: it would not let them submit a ticket or enter a request without giving all the necessary information. We could also control the format of that information, and knowing the format then allowed us to automate. In other words, a client would now enter a web form and complete all the data; if he did not have all the data, he could not complete the form. Once the data was complete, the request was sent to us, and instead of processing it manually we could hand it to an automated script that processed, for example, access controls. Finally, we added a database to save statistics, to see how this helps us improve response times, how many requests are accepted, how many are rejected, and to have feedback on everything that is happening and on how this really improves on the standard process we had before.

Here I wanted to tell you a bit of the story of how this evolved within the team. The first thing we did was an environment without containers, on a Linux machine, and what we realized was that every time a new member joined the team it was impossible to replicate that environment: we had to find a way to install all the packages again, and something was always missing. And the worst thought was: what would happen if one day we have this in production and we have to replicate the production environment, how long would it take? The time it took to generate the environment again made us think of alternatives like containers.

So the first thing we did was move everything to Docker. What we gained, besides the optimization of resources, was the ability to regenerate the whole environment for a new team member, in this transition to a DevOps team that we were going through. As more people from operations wanted to move into development, there had to be new environments, and obviously we wanted to be able to generate production whenever and however we wanted. Finally, we moved to OpenShift, because in a DevOps world there were many things we still didn't have: for example, the ability to scale to many replicas for high availability, and since we were working mainly with web forms we needed security on the web pages. In OpenShift, for example, adding SSL security is as easy as ticking a checkbox, so in the blink of an eye we had a lot of capabilities that we didn't have with a standalone container, including the orchestration tooling I mentioned before.

With all this code, a practice we started to adopt was version control, and in particular version control that allowed us to work collaboratively and to recover any previous version of our code in case something went wrong and we had to roll back. We implemented a branching model for the repository, so we could separate what was production from what was development, and also, in this transition, so that when a new member joined the team we could work on two features at the same time without stepping on each other, and then merge it all together; that is the control that versioning gave us. We were already working on other initiatives, like Ansible, but here we added the branching model for this project.

And finally, I wanted to show you a rough block diagram of the application. On one side we have the user, with the web frontend as the user interface; all of that is backed by a Python backend, and all of that code is under version control, as on the previous slide. Now that we had requests in a standardized format, we could automate the whole process. Those services also had to interact with our legacy ticketing system and everything that was already in place, so there is communication with the legacy ticketing system, which sends data to another Python microservice, the block up there, that is in charge of collecting the data. We are starting to use a non-relational database, and data visualization tools that consume the data in that non-relational database, so that we can see everything visually, access all the data easily, and watch how this progresses over time and what the response to our users looks like.
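As a minimal sketch of that request flow, here is what a validating backend endpoint could look like, assuming Flask again; the field names, route and statistics helper are hypothetical, chosen only to illustrate the access-control case.

```python
# Hypothetical sketch of the self-service portal backend: reject
# incomplete requests up front, then hand complete, standardized
# requests to automation. Field names are illustrative.
from flask import Flask, request, jsonify

app = Flask(__name__)
REQUIRED = ("requester", "server", "user_id", "access_level")

@app.route("/api/access-request", methods=["POST"])
def access_request():
    data = request.get_json(force=True)
    missing = [f for f in REQUIRED if not data.get(f)]
    if missing:  # the form cannot be submitted with incomplete data
        return jsonify({"error": f"missing fields: {missing}"}), 400
    # A standardized format means the request can go straight to an
    # automated script instead of being read by hand; here we just
    # record it for statistics (stand-in for the real processing).
    record_for_statistics(data)
    return jsonify({"status": "accepted"}), 202

def record_for_statistics(data):
    # Stand-in for inserting into the non-relational statistics database.
    print("accepted request:", data)
```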
Before we get to the final conclusion: what I just showed you is the lowest level of detail that the company really allows us to show. I would like to talk much more about it, but unfortunately we can't put it up here. To conclude, I really wanted to highlight perhaps our biggest challenge. As I was saying before, we are infrastructure people; we work with people very focused on that, and it is perhaps different for an application developer, whose work ends once the application is created and who uses the platform only to host it. Our work doesn't end there: along with that, we have to keep the service running, automate all the infrastructure, keep the service available and make it continue to grow. Really, distributing the knowledge between network people, developers, even server people, and producing the mix of the three needed to build all this, is very difficult, and at global scale too, as I said before: sharing all of this and avoiding duplicated effort, everyone creating the same thing over and over. We are still in that process. We are still seeing a lot of teams arrive at the last stage I mentioned; several are already integrating new technologies like cloud, others are thinking about things related to artificial intelligence and more, all for infrastructure. That is really our goal, and we wanted to share with you our evolution in this process. Thank you very much.