My name is Zach Berry. I'm a machine learning solution specialist at Red Hat. Today I'll be discussing DevOps versus MLOps versus AIOps: a disambiguation, and what this means for the discipline of information technology. The agenda today: a little bit on my biography and why that might be relevant; some background on the situation we're presented with; a glossary and disambiguation of some terms, with a discussion of how those terms relate to each other; and then my attempt to peer a little into the future.

So, my biography. I started out as a Linux admin in the dot-com era, and I was on what I've since come to realize was a DevOps team, before the term DevOps was coined. I spent a year and a half as a trainer and then a decade as a solution architect for Red Hat, working with strategic accounts, mostly out here in the western US. For the last two and a half years, I've been a solution sales specialist here at Red Hat. The reason I bring this up is to say that this presentation is very much from the perspective of an IT veteran, and is mostly targeted at what other IT veterans might need to learn about data science and data engineering to do their jobs. I'm not a data scientist or data engineer, so I'm going to try to communicate what I've learned about that field to an IT audience.

So, a little background. Earlier IT transitions that I think are of note here: proprietary software towards open source software; big iron towards commodity hardware; physical servers towards virtual servers; a "throw it over the fence" methodology towards DevOps. "Throw it over the fence" means that code releases were produced behind the veil and then simply appeared for operations teams to run. And if they didn't run in production, that was the operations team's problem.
There's been a move from waterfall application development towards Agile methodologies, of which there are several. App configuration and automation have moved very much towards containerization and orchestration. This isn't to say that automation isn't still a key thing; obviously it's still in place, but fewer and fewer organizations are trying to use a configuration tool to automate how application deployments work. Containerization seems to be a much better tool for that challenge.

And there's been a move towards on-demand, cloud-style resources. I don't know what to call the pre-cloud time; I'd be interested in feedback as to what we should call the pre-cloud era. I'm defining cloud here as resources being available on demand, where you just run a command or call some sort of API and the resource becomes available, rather than having to file a ticket and wait for some sort of human action before the resource is provisioned. Cloud of course means a lot of other things, but that's the definition I'm using here. And data, well, data transmission and storage: there's a lot happening in that area as well. I'm going to leave that out of scope for this talk, because I don't want to bite off too much in one discussion.

So we have a set of earlier transitions here, and my point is that I believe the transition towards using artificial intelligence, data science, and machine learning techniques is, in a few years' hindsight, going to be seen as a transition of the same magnitude as these. So what's next? What are the questions we should be asking ourselves here? What can we learn from the earlier transitions? And not just lessons like "yes, it's a good idea to use DevOps," but what do we learn about how people and organizations work?
What are the deeper lessons from those earlier transitions that we should be taking to heart, and how should we apply those lessons to our benefit today?

A little more background here. There's a concept that I think is very relevant and is not commonly discussed in IT, but should be: the concept of the Red Queen. This comes from Lewis Carroll's Through the Looking-Glass. There's a scene in the book, and obviously in the film adaptations as well, where Alice is running away from the Red Queen and her minions, and she finds that as fast as she runs, the ground underneath her is moving just as fast, so even though she's running, she's not actually getting anywhere. This idea has been picked up in the context of evolutionary biology, which is where I know it from. It expresses the idea that any advance, any forward motion, should be seen in the context of your relative position against your competitors. All of us are always running hard, but that just creates stasis unless you can gain some sort of advantage vis-a-vis your competition, or unless your competition is somehow moving faster than you. I know this concept from a book called The Red Queen by Matt Ridley. I think this is an important idea, and I'm going to come back to it a few times.

So let's do a glossary and a somewhat arbitrary disambiguation of some terms. First off, DevOps. There are a lot of different definitions for DevOps; let me give you the one I'm using here: DevOps is an enterprise capability for continuous software delivery that enables clients to seize market opportunities and reduce time to customer feedback. Continuous delivery, seizing market opportunities when they're available, reducing time to customer feedback. That's from IBM's Kevin Minnick in 2013.
Other DevOps features of note from other definitions that I think are worthwhile to bring in here: many definitions discuss the ability to experiment, fail, and learn as a small team; clearly defined external commitments and expectations; measuring outcomes; and matching responsibility with capability.

So how does Agile fit in here? Agile and DevOps are two distinct concepts, but DevOps very often relies on Agile, so the two should be discussed together, though the separation between them should also be acknowledged. There are three principles from the Agile Manifesto that I think are worth pointing out. One: deliver working software frequently, from a couple of weeks to a couple of months, with a preference for the shorter timescale. Two: business people and developers must work together daily throughout the project. Three: build projects around motivated individuals, give them the environment and support they need, and trust them to get the job done. So that's DevOps and Agile.

Agile, I think, should be thought of here in its broadest sense, not just in terms of how it could be used to deliver a web application or a microservice. The example I'd like to give is that in the time of COVID-19, with social distancing, and in my case two parents working white-collar jobs from home and my daughter attending a hastily organized kindergarten, I attempted, well, I'm currently still attempting, to implement Agile methodology in how we handle our daily workload. As an example, there's a task list on the right here: we have our daily 7 a.m. stand-up, where we review the backlog, add new tasks, and prioritize. So what do we learn from this? We certainly learn that individual talents are better suited for certain tasks; individual specialties still matter. My daughter is much better with Mandarin.
I'm much better with salesforce.com. And we learned about our constraints via failure, and kept moving forward. The reason I give this example is: let's try to think of Agile in a very broad sense here, just as a way to take in incoming tasks, provide feedback, and set expectations about what we're going to be able to accomplish in a given amount of time.

A word here on data science, artificial intelligence, machine learning, and deep learning. Deep learning is a subset of machine learning, which is a subset of artificial intelligence, which is a subset of data science. For most of what I'm going to talk about here, I'll focus on machine learning, just because I think it's the easiest in many cases to connect back to the business and is the most widely applicable currently.

So let's review some of the real basics of machine learning. First, we set goals. We gather and prepare data, and label the data based on what we know of it. Then we use that source data to generate a model, deploy that model, and start applying novel data to it. We implement interfaces to those models: basically, how is the model actually called in process? Then we monitor the performance of the model. Is it giving the right kind of results? Is it also performing well in the sense of regular software performance; is it efficient? We loop that back into ML model development going forward. And we also take what we've learned and provide feedback to the business, because remember, we're supposed to be working here in tight connection with the business that we serve.

So let's disambiguate some terms: DevOps, MLOps, and AIOps. MLOps I'm defining here as applying DevOps techniques to the challenges that are well addressed with machine learning.
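As an aside, the basic machine learning loop just described (label data, train, deploy behind an interface, monitor on novel data, feed results back into development) can be sketched in a few lines of Python. Everything here is an illustrative assumption rather than something from the talk: the toy data, the nearest-centroid stand-in for a real model, and all of the function names are invented for this sketch.

```python
# A minimal sketch of the ML loop: label data, train a model, serve
# predictions through an interface, and monitor accuracy as feedback.
from statistics import mean

# 1. Gather and label source data: (feature, label) pairs.
labeled = [(1.0, "low"), (1.2, "low"), (0.8, "low"),
           (4.0, "high"), (4.5, "high"), (3.9, "high")]

def train(data):
    """Generate a model: one centroid per label (a toy stand-in for fitting)."""
    labels = {lbl for _, lbl in data}
    return {lbl: mean(x for x, l in data if l == lbl) for lbl in labels}

def predict(model, x):
    """Deploy/serve: the interface through which the model is called."""
    return min(model, key=lambda lbl: abs(model[lbl] - x))

def monitor(model, fresh_data):
    """Monitor: measure accuracy on novel data; feeds back into development."""
    hits = sum(predict(model, x) == lbl for x, lbl in fresh_data)
    return hits / len(fresh_data)

model = train(labeled)
accuracy = monitor(model, [(1.1, "low"), (4.2, "high"), (0.9, "low")])
```

The point of the sketch is the shape of the loop, not the model: in practice the `train` step would be a real learning algorithm, and the `monitor` number is exactly what gets looped back into model development and reported to the business.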
To put it another way, MLOps is using DevOps to address those challenges, rather than applying DevOps to machine learning itself, because remember, it's the outcomes we're concerned about, not so much the process. So if we find that machine learning is not actually the best way to address a problem, we would surface that, feed it back to the organization, and perhaps qualify out that particular challenge. AIOps, then, I define as applying machine learning back onto DevOps and IT itself.

I admit there's a certain turtles-all-the-way-down situation created here: we could be using MLOps to solve business challenges, and then, once we had all of these great machine learning techniques available to us, we might, in addressing our Agile backlog, attempt to use the same techniques on site reliability, or on something related to how our software itself operates, based on the data and telemetry the application is throwing off. So MLOps is using DevOps for machine-learning-related challenges, and AIOps is applying these techniques back onto IT and DevOps themselves.

So, peering into the future. What have we learned from this? What should we be thinking about looking forward? First, a quick digression. I'm talking a lot about the impact of machine learning and artificial intelligence on IT, and commonly I hear sentiments that basically the robots are coming for our jobs, that a lot of what's done in IT is going to be made obsolete by machine learning. I think that's definitely not going to happen. First off, AI is great for some problems and terrible for others. If we imagine a customer service line, it's very, very difficult for a computer, for artificial intelligence, to act on a phone as though it were an actual person. You can almost always tell, right?
It's very hard to pass the Turing test when it comes to speech. Something like that, having a conversation, communicating by voice with another person, is an area where humans do extremely well and completely naturally. So that's perhaps a bad area to focus on with AI. What might make better sense would be to supplement what humans are able to do. There's the huge field of natural language processing: taking incoming speech or incoming text, parsing it, and figuring out what's going on. So perhaps, rather than trying to have a robot answer the phone directly, you could instead have artificial intelligence listen to a phone call as it's being handled by a customer service rep, searching a database of known answers and automatically presenting those answers to the human so they can get through the call very quickly, or recognizing things about the conversation for later review: perhaps sensing the emotional state of the caller, or of the customer service rep, and improving efficiency.

Again, this is a Red Queen situation: a robot plus a human will beat either a human or a robot acting alone. And even in the worst-case scenario, where humans only add a marginal benefit over what robots are able to do on their own, in a Red Queen situation that marginal difference matters; a marginal advantage can make a big difference over time. So we are going to continue to see spaces where artificial intelligence supplements and improves what we do, rather than replacing us entirely. I feel strongly about this question.

So where do these machine learning tools and techniques lead IT?
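The "surface known answers to the human rep" idea above can be sketched very simply: score each stored answer against the caller's words and suggest the best match. The knowledge base, the token-overlap (Jaccard) scoring, and all names here are illustrative assumptions for this sketch; a real system would use proper NLP techniques such as intent models or embeddings rather than raw word overlap.

```python
# Sketch: suggest a canned answer to a human rep based on what the caller said.
# Each stored question is scored by token overlap with the caller's utterance.

known_answers = {
    "how do i reset my password": "Visit the account page and click 'Reset password'.",
    "what are your support hours": "Support is available 24/7 by phone and chat.",
    "how do i cancel my subscription": "Open Billing and choose 'Cancel subscription'.",
}

def tokens(text):
    """Naive tokenization: lowercase, split on whitespace."""
    return set(text.lower().split())

def suggest(caller_utterance):
    """Return the stored answer whose question best overlaps the caller's words."""
    caller = tokens(caller_utterance)

    def overlap(question):
        q = tokens(question)
        return len(q & caller) / len(q | caller)  # Jaccard similarity

    return known_answers[max(known_answers, key=overlap)]

suggestion = suggest("hi, I need to reset my password please")
```

The design point is that the human stays in the loop: the system only ranks and surfaces candidate answers, and the rep decides what to actually say, which is exactly the robot-plus-human pairing the Red Queen argument favors.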
There's a major shift happening in IT right now that's similar to the shift that happened when other industries adopted lean techniques from manufacturing. Lean, sometimes referred to as the Toyota Way, is a way to drive efficiencies in manufacturing, drive down cost, and improve performance and throughput. An example I know well, because my father spent his career doing this, is medical laboratories. If you think about the way medical laboratories worked a generation ago, there were individual lab techs, basically people trained to work as scientists, performing experiments over and over again, taking samples and trying to identify some property or another of each sample. It really worked on a cottage-industry model. Over the last generation, that industry has changed dramatically towards more of a manufacturing model, where as much as possible has been automated. Humans are used in the areas where humans work much better than machines, certainly in cases where the human visual sense is better than what a machine is able to do. Humans supervise, provide quality assurance, and monitor everything. Think about the testing volumes that are required today in the world of COVID, which never would have been possible using the old model: it's only through this move towards lean manufacturing techniques that medical laboratories are able to scale to the levels they do today.

I think something similar is afoot for IT. So far, only challenges with very high return on investment have been tackled using machine learning and AI techniques. So in aggregate, across the whole industry, massive gains are still available. Machine learning techniques can even improve the efficiency of your garden; there certainly are many areas within IT that are yet to be improved.
The knowledge of machine learning techniques is becoming much more widespread, and the cost of the resources and tools needed to implement machine learning is falling dramatically thanks to commodity hardware and open source software. What this really means is that we're on the cusp of something great as an industry. Using machine learning, the variety of problems we'll be able to address with applications and IT techniques, the number and type of challenges accessible to us, is going to grow dramatically. We're simply going to be able to do more with IT than we can right now, and that's really the exciting thing.

So what are the risks? What should we be concerned about? What questions should we be asking ourselves as IT professionals? I think the big outstanding question is: will we integrate data science techniques into IT in light of what we have already learned from DevOps? Or will data science be left off to the side as a sort of separate priesthood? Will data scientists throw machine learning models over the wall to operations teams just to run, without tight feedback loops? Will the people creating the models not be part of the feedback loop that leads back to the customer or the stakeholder? Will data science practitioners, in essence, become estranged from IT in the same way that IT is unfortunately estranged from its own customers and from the business it serves? Those are the risks in front of us, and I think a lot of what Red Hat is going to discuss in the rest of these presentations is how to prevent that from happening.

Thank you very much for your time. My name's Zach Berry. It's been a pleasure speaking with you.