I am going to introduce our first speaker, Mr. Reza Shafi, Vice President of Product for OpenShift at Red Hat. Thank you.

Good morning, everyone. It's great to be here in Buenos Aires. I love this city; I used to come here almost every day back when I was at MuleSoft for four years. Speaking of which, is anyone here from MuleSoft? Anyone? No, nobody. OK. So today I obviously want to talk about the OpenShift roadmap and the OpenShift vision, but before doing that, I want to address a more fundamental question, which is: why does it matter? Why does OpenShift matter to what we do every day? To answer that question, I'm going to go back to 2003, to a Harvard Business Review article that came out titled "IT Doesn't Matter." It's obviously a play on words: IT doesn't matter. And if you remember this article, I see some people nodding, for some reason it caused quite a stir, a great deal of concern in the IT sector, in the tech sector. I remember I was a young consultant at the time, and I started to wonder: if Nicholas Carr is right, then my future in IT might not be as bright as I had hoped. The case Nicholas Carr was making is that IT is meant to be what is called a commodity technology. At the time, Carr was suggesting to businesses that they not invest in IT, because like electricity and the railroads, it was destined to be a commodity technology that is best shared as a utility by a handful of providers, and innovation was not going to come from individual businesses investing in IT. If you read the article, it actually sounds fairly convincing. Now, fifteen years later, I looked back at it last week, and I have to say I think Nicholas Carr was wrong.
I have used Uber, the ride-sharing service that has completely disrupted the taxi industry, and in some ways the transportation industry, completely IT-based and completely innovative. I have used Concur to do all of my expenses, by the way, which has completely changed how expenses work compared to 2003. And I have used my phone to browse a set of 500 movies and decide which ones I wanted to watch; the way airlines are innovating at that level has completely changed. And by the way, the whole time I have been using the cloud, deploying applications to various on-demand compute services and getting access to CPU and memory as if it were software. So Nicholas Carr's claim was probably wrong. But wait a minute. That last example I used is perhaps one where he was right, because of the way I received all those compute resources: when I use AWS, which has commoditized compute services, I can go to a handful of cloud providers and consume their services on a pay-per-use basis. So couldn't we say that compute has been commoditized? So what is going on here? Has it been commoditized or has it not? I would say the answer is that it depends, and that there are many layers to the equation, at least three.
If you think about compute infrastructure from the pure perspective of "I just want CPU, I just want memory, I just want networking," that is indeed on its way to being commoditized over time, and the cloud providers are definitely the way that is happening, and that is great. But that does not mean innovation is not happening in the layers above, or that this technology as a whole is no longer a source of innovation. Because if you go just one layer up, to what is commonly called middleware, innovation is still going strong: you have things like Kafka, you have things like Vitess; there are huge amounts of innovation one can achieve using the new middleware services that keep appearing, in order to write applications, which is the next layer above that. And applications, of course, are the things that give us the most innovation: by being able to more quickly build the right applications and improve the right applications, our businesses are going to be more successful, and that source of IT innovation is likely not going to stop for a long, long time. Uber is an application, at the end of the day.

It's interesting, because I think Nicholas Carr, and I'm not picking on Nicholas Carr too much here, was actually wrong on the electrical aspect of this as well. In some ways electricity is the same thing: how electricity is generated and the electrical infrastructure doesn't matter that much, but if you go to the services that are built on top of it, there is still innovation there. Today, I was just at a Starbucks in the States where I was able to put my iPhone down on the table and it started charging; that is innovation at the electrical services layer. The new cars that are electric, where you can just plug them in and they start charging, that's innovation
at the electrical services layer, and of course at the layer of electrical devices that innovation is not going away for a while. But there is an important difference between electricity and technology, really computing I should say, that worries me about where we are going. To demonstrate what that difference is, I am going to go way back to when electricity was introduced as a commodity. These are devices from back in the 1900s that started using electricity, and you will notice something peculiar about them: you have a toaster on the left, at the top left a device to heat food, and at the bottom left a hair straightener. You can see that all of them use a light bulb socket. That's because electricity was first introduced for lighting, and when people started building other devices to tap into the electricity, they said, OK, I am just going to use the light bulb socket as the interface. That turned out to be a blessing in disguise, because it provided a decoupling from the infrastructure right away: instead of people creating devices that tapped directly into Edison's or Tesla's proprietary infrastructure in order to receive electricity, they just used the light bulb socket, and that decoupled the devices from the infrastructure.

I am worried that in today's world of computing, as the computing infrastructure is getting commoditized, the cloud providers, who are basically the commodity providers, are also trying to create services that tightly couple our applications, which are the equivalent of electrical devices, to their infrastructure. When you use a service like Lambda, when you use a service like Athena or Kinesis, you are tightly coupled to AWS's infrastructure. Here is what that would mean if the equivalent of the toaster were built that way: I was told last night that here in Buenos Aires there are two electricity providers, Edesur and Edenor. That would mean that whenever you moved to a place served by Edesur and your toaster was built for Edenor, you could not
move your toaster; you would have to buy a new toaster. You want to go to Uruguay? Sorry, you have to buy a new toaster. That would not be a world I would like, and you definitely don't want your applications to be that way. That is a big part of why I think OpenShift matters: OpenShift brings that neutrality layer, that portability layer, on top of the underlying cloud infrastructure. It allows you to take advantage of the flexibility and ease of use that infrastructure as a service has effectively become, yet build applications that are decoupled from it.

I want to talk about how OpenShift achieves that. There are at least three pieces to the puzzle: the first is Kubernetes, the second is automated operations, and the third is bringing automated operations to the diverse set of services that are out there, and not just one cloud provider's services. So let's talk about each of these, starting with Kubernetes. Kubernetes was introduced a while ago; 2015, I believe, since I saw a couple of weeks ago that it was the anniversary of Kubernetes, so you see a lot of blogs out there. Google obviously introduced Kubernetes; it was based on an internal project called Borg that Google itself used to consume services and run all of their applications. And Red Hat worked very closely with Google to bring Kubernetes to the enterprise and make it usable by non-Google consumers. There were two companies that worked closely with Google to do that, Red Hat and CoreOS, and they happen to be the same company now. For a while there, if you were following this scene, everyone agreed that containers were the future, but it was unclear which orchestration technology was going to win. There were a couple of competing orchestration technologies out there, Docker Swarm, Mesos, and Kubernetes being the top ones, but I think at this point it is pretty clear that Kubernetes is the de facto
orchestration standard, because all the other vendors who were pushing competing technologies, including Mesos and Docker, have jumped onto the Kubernetes side of the house. And that is great, because at least now we don't have to argue about what the right orchestration technology is.

By the way, just to pause for a second and talk about what Kubernetes does and what orchestration is: at the end of the day, what it does is abstract away the compute infrastructure from the applications running on top of it. You can just keep adding compute on one side to serve your compute needs, and at the top you just add applications and tell Kubernetes what their compute needs are, and it starts playing that perfect Tetris game to make sure the right applications get the right resources, so you don't have to schedule all of that yourself. So Kubernetes acts as that great neutralization layer. And I would like to point out that Red Hat, with our experience in OpenShift and being among the first to endorse Kubernetes, brings a great deal of expertise to the table, and we have been contributing greatly to Kubernetes: Red Hat heads a dozen or so special interest groups alongside Google, and we also have the most neutral view on how Kubernetes should relate to compute, because we don't come in with an agenda of tying it tightly to the underlying compute infrastructure, as opposed to some of the cloud providers.

So that's one piece of the puzzle. Let's talk about the other two pieces, which were automated operations and bringing the simplicity of the cloud to the services above the Kubernetes layer. A big part of how Red Hat is solving those two pieces of the puzzle is through the acquisition of CoreOS and the integration of CoreOS technology into OpenShift and the future of OpenShift. So the capabilities I'm going to talk about next are the building blocks that we acquired from
CoreOS and how we're integrating them into OpenShift going forward. CoreOS had three main products. One was called CoreOS Container Linux, a container-optimized operating system with over-the-air updates; I'm going to talk more about that. CoreOS Tectonic was their Kubernetes distribution, a competitor to OpenShift, and it also came with automated operations and over-the-air updates. What I mean by that, by the way, is that it brings the simplicity of the cloud no matter where you're running: if you run Tectonic on AWS, on premises, on OpenStack, or on Google, it doesn't matter, you keep getting updates that pop up just like on your iPhone and say, an update from version 1.7 to 1.8 of Kubernetes is available, do you want to apply it? You press a button, and within 10 minutes all of your nodes are updated. The cluster automatically backs itself up, the etcd state is backed up, so that if something goes wrong you can always restore it automatically. These are the capabilities we typically associate with the cloud, but we want to have them anywhere and just not worry about it, and that's what Tectonic brought to the picture. And finally, CoreOS Quay, the image registry that let you store all of your images, track their changes, and point all of your application containers at it.

So a big part of what CoreOS brought to the picture is automation of day-2 operations: installation, upgrade, backup, failure recovery, and so on, so that you don't have to worry about them no matter where you are. But it brought that at the operating system and Kubernetes layers. The good question you might ask is: what about everything else I use on top of it? I have a Postgres database, a MySQL database, I have Kafka running on there, I have Elasticsearch running on there, Redis as an in-memory data grid, Fuse, what not. They all need to behave with the simplicity of the cloud for me to be incentivized to use those
services rather than the cloud provider services that have the light bulb problem of coupling all the way down to the provider. And so with that, we introduced at KubeCon Copenhagen, about three months ago, something called the Operator Framework. It's an open source project; if you google it, you'll find the GitHub organization. The Operator Framework is all about bringing the toolkits from the CoreOS technology to all the service providers and to our customers, so that they can use it to build cloud-like capabilities into their services, so that all of the services I just mentioned behave like the cloud on top of Kubernetes. As soon as we introduced the Operator Framework we received a great deal of support, with 60-plus ISVs coming in at the onset saying they wanted to use the Operator Framework and certify on top of OpenShift. And if you go to the Operator Framework awesome-operators repo, you'll see a list of 55 existing operators that have already been written: there's a Redis one in there, there is a WebLogic one, there is a Kafka one, there are many, many types of operators by a variety of vendors. That's exactly what we wanted: our goal is to create a diverse set of services from all the vendors out there, so that no one cloud provider can hold us hostage to its services alone, and that is succeeding.

So that's great; what does this mean for OpenShift? We're taking all that technology, bringing it into OpenShift, and exposing it as a first-class citizen. Going forward, OpenShift will have an operator console as well: not only will there be a console for the consumers of the cluster who want to deploy their applications, you'll also have a console that gives you a much more sysadmin-centric view of the cluster. It will show you updates that are coming up so you can apply them, and it will give you monitoring and metering information, which I'll talk more about in the coming slides.

I want to go back now to the operating system. I mentioned that
Container Linux was one of the technologies we acquired from CoreOS. Atomic and Container Linux are being merged into a new operating system called Red Hat CoreOS, and the Red Hat CoreOS operating system will have all the qualities of Container Linux. It is going to be container-optimized, meaning just enough operating system, with a small surface area for attacks. It will also have over-the-air updates, which means you will be able to receive updates that are dynamically applied across your operating system layer. With Container Linux we have today over 200,000 nodes registered to receive automated updates; that means every two weeks, when we push an update, 200,000 nodes get updated automatically. And this is what is coming to Red Hat CoreOS, which is going to be the nucleus of the new stack for OpenShift 4.0, the next version of OpenShift, which will have the technologies I will talk about. That means the installation and upgrade experience of OpenShift is going to change fundamentally. We are going to a world where the first layer is Red Hat CoreOS and on top of that Kubernetes with fully automated operations; so we go from a world where you have to do a lot of management of the infrastructure to a world where the whole infrastructure has automated operations, with automated backups, automated tuning, automated upgrades, and so on. But as an operator you still want to see what is going on, so you will still be able to watch: these upgrades are coming in, let's see what happens, the dry run is looking good, first I'm going to apply it to my pre-production environment, that worked, now I'm going to apply it to my production environment, that worked, and if it didn't work, you can roll it back. These are the things you want to be able to do as an operator: you want the system to automate it, but you still want to have control. Part of having control is going to be
monitoring, and we are bringing in Prometheus. We are one of the main contributors to Prometheus, which is becoming a really important project for monitoring, so we are building Prometheus into OpenShift with out-of-the-box dashboards, both embedded within the operator console and through a technology called Grafana, baked into our console, that allows you to view and slice and dice the data in very powerful ways.

Now, another feature that is going to be exposed is called operator metering, and what it allows you to do is see who is using what percentage of the cluster. As you move to OpenShift, and I'm sure those of you who are using it have noticed this, it is an aggregating technology: all the different stakeholders who had different applications running on their own infrastructure are now deploying those applications on Kubernetes, which abstracts away the infrastructure layer. So guess who is paying the bill for the infrastructure now? It's not the application owners anymore; it's the Kubernetes provider. If you are the people bringing in Kubernetes, you have to worry about paying the bill for the infrastructure; the application owners don't anymore. That's good, but you're going to have to explain to your CIO why this bill is so big, and operator metering allows you to explain that. Sometimes we call this metering and chargeback; sometimes we call it metering and shameback, because what it allows you to do is show the breakdown: this application is using this much of the AWS bill, this other application is using this much of the CPU and this much of the memory, and it comes down to this many dollars. That's an important part of running OpenShift at scale.

Now I want to talk a little bit about the registry. Who here has heard about a technology called Helm charts, or Helm? OK, a couple of hands. So Helm is a technology used to describe applications in the Kubernetes world, and
we are betting quite a bit on Helm. Going forward, we are going to double down on Helm support in Quay, the image registry. What that is going to allow us to do is treat your own applications as first-class artifacts: if you have mobile apps that need to run on Kubernetes, if you have any enterprise app, a Spring app, what not, you represent them with Helm charts that describe which Kubernetes constructs they need to map to, and then in Quay we will be able to represent those as applications, even though they span multiple images and multiple Kubernetes manifests. We can then tie that to a Helm operator, which comes out of the box with OpenShift and allows us to have the full CI/CD pipeline automated for you: as a user you just submit your code to GitHub, and if you have a favorite CI pipeline, it doesn't matter which one, as soon as it's done the result is put in Quay's Helm repository, and the cluster is listening to that Helm repository and automatically picks up the updates.

OK, that brings me to my second-to-last slide. We are going to exciting places with OpenShift 4, which is going to be our next big release. I know OpenShift 3.10 was just released; 3.11 is coming up with some elements of what you just saw, such as metering and chargeback and built-in monitoring, but it won't have the automated operations capabilities. With OpenShift 4 we are going to have the automated operations capabilities and the monitoring capabilities we just talked about; we are going to have an extended catalog of vendor-backed applications that come with automated operations and are certified on OpenShift; and we are going to have the developer experience capabilities in terms of integration with Helm charts. So this is the big release for us, a very exciting release, currently planned for a January-type timeframe. We really believe this is a game changer, and the reason why this matters, the reason why this matters, is that OpenShift is picking up steam. OpenShift is
being used by more and more of our customers, and I believe one of the big reasons OpenShift is so popular is that we are able to abstract away the infrastructure yet give you all the benefits of the cloud and of containerization, without you having applications that tie you all the way down to the cloud providers. And I think that's important. So work with us, talk to us about OpenShift and OpenShift 4, and I look forward to talking to you in person as well. Thank you for your time today.

Very good. Well, thank you very much, Reza. A reminder that the dynamic of this event is fully collaborative, so during the coffee break Reza and any of the presenters will be fully available for you to come up and ask any questions you'd like. If you need any kind of assistance, we are there to help you.