So, good afternoon, everyone, and like Michelle said, welcome to the Open Platform 3.0 track. Over the next 40 minutes, give or take, I'd like to have a discussion with you about AI, semantic interoperability, and DevOps. As mentioned, I'm both an architect at IBM and also currently serving as chair of the Open Platform 3.0 Forum. So what is this talk? This is not a talk where I'm going to define AI in very strict terms, nor is it a talk where I'm going to go deep into what DevOps is and what DevOps isn't. This talk is a review of experiences I've gained running multiple AI implementations. My hope is that by the end of this discussion, we will all walk away with a shared understanding of AI projects, common gotchas with AI projects, how DevOps can be applied to address some of these common pitfalls, and the role of semantic interoperability in creating successful AI implementations.

For anyone who may not know, AI systems are hard. Some of the reasons they're hard are well-known problems in the IT space, but some are novel problems unique to the AI domain. It's my belief and experience that if we look at the rich body of knowledge that DevOps practitioners have created, we'll have an opportunity to start addressing some of these pitfalls.

So first, let's level-set about where we are in AI. I always like to start AI discussions with a timeline, because it helps properly illustrate where we are in the context of where we've been, and it helps hint at where we're going. AI as a term has been around longer than I've been alive. It's likely been around for the whole careers of many of the folks here. It's not a new idea, but it's an idea that only recently has become broadly commercialized. I like to mark the beginning of AI with the invention of the Turing test around the 1950s: the concept of a test we could use to determine whether a machine was intelligent. The initial test that Alan Turing proposed, which most of you are likely familiar with, is a very simple one: an AI system can be considered intelligent if it can fool a human being into thinking that it is a human. Now, the way this test has been used to evaluate systems over the years is through a terminal. On one side, you've got a set of human judges, and on the other side, you've got a system. At the time the Turing test was invented, it was thought that no machine would ever be able to do this. Jump forward to the year 2014, and a system named Eugene became one of the first to actually pass the Turing test. Now, here in 2018, this isn't that surprising. But at the time the Turing test was first created, it was unthinkable that a machine would get there.

Another example I like to use is the Roomba. First coming out in 2002, everyone knows what a Roomba is. Everyone's seen a Roomba. Many of us might own one, but even if we don't, we know what it is: a common product, a little robot that drives around and vacuums your carpet for you. But if we look back to 1966, we can see a system that resembles much of what a Roomba does, without the usefulness of vacuuming. This 1966 system, named Shakey, was all about reasoning how to navigate through a room. Now, again, we all have Roombas or we've seen them. That's not amazing or impressive anymore. But at the time, Shakey was groundbreaking. And you can actually find videos of this.
If you want to really be humbled and blown away by the progress of technology, look up a video of Shakey, and you'll see the system in 1966 chugging through an obstacle course and successfully getting through it. The downside, you'll notice, is that sometimes the system takes a few seconds to figure out where to go. Sometimes it takes a few minutes. Sometimes it takes quite a bit longer than that. So compare that system, groundbreaking at the time, to Roombas now, which are everywhere, right? That's not impressive anymore. Nowadays, it's hard to find a commercial customer-facing system that doesn't have some aspect of AI in it. Almost all of us here have some kind of smartphone. If it's Android, you've got the Google Assistant. If it's Apple, you've got Siri. If you don't have a smartphone, you might have a home assistant, such as Google Home or Amazon Alexa. Outside the realm of customer-facing AI, we have huge enterprise AI projects that have succeeded. We had IBM's Watson, which won at Jeopardy!, followed on to work in oncology, and has continued on to debate people on nearly any topic. You have Google with their AlphaGo system, which over several iterations got progressively better at a game that previously only humans could play at that level. So what I see in this timeline is that AI, though it's not new, is really coming into its stride. AI is everywhere, from the phones we carry in our pockets to the largest enterprise systems being developed and deployed today.

Now, who here remembers the first application they ever wrote, if they're a technologist? If you're not a technologist, the first system you ever designed, the first work product you ever created, right? Everyone remembers the first one. In a way, it's a bit of a career-defining moment. It shapes the rest of your path. It starts building that professional context. I remember very clearly the first program I wrote. It was a bit of C++ code, C++ for no other reason than that the university where I started learning programming used C++ as the first language. I've not used it since in my career, but it was the first. Now, I remember the process of going through and understanding the way this particular language worked, understanding the compiler and the role it played, understanding header files, understanding the nuances of how code was executed based on how I wrote it. What I didn't know is that throughout my career, as I learned more about programming and about different domains, everything I knew would become outdated and have to be refreshed.

For me, this first happened when I wrote this: a very simple web page. I wrote it during my first experience with IBM, back in 2011. I'd never done web programming before. As great as my university was, web programming was a senior-level class. And here I was, being asked to write a very simple web page. Now, I got through it, but what I realized as I did is that many of the assumptions I had made about programming, about technology, about how these systems work, were kind of irrelevant. The idea that you wrote code in a line order, and based on that line order it would execute, doesn't apply when you're dealing with client-side code. It doesn't apply when you're dealing with web programming.
What's more, some of the security I had taken for granted when writing applications that just ran on a system went out the window when I realized the horror of dealing with user browsers, some of which are standards-compliant, some of which are not; some of which are up to date, some of which are not. In this process, I felt like my world was flipped upside down. And a few years later, it happened again: I was introduced to truly asynchronous programming. Here I had an opportunity to learn about web programming taken to a larger extent, where I was now able to integrate a number of third-party systems. Though I thought I had known what it was to write a web app, asynchronous programming again redefined what that meant, because now I had a whole different set of assumptions and concerns to keep in mind. And the original set of assumptions I'd finally gotten comfortable with was no longer relevant.

This is a pattern that's very common in technology, but it's also a pattern that's somewhat common across all industries. For me, AI is the most recent of these paradigm shifts. The jump to AI, to me, is no different than the jump from writing programs that run on your system to writing web apps. It's yet another one of the paradigm shifts that are so common in technology. Now, dealing with paradigm shifts isn't a skill that only technologists can lay claim to. In any industry, if you're in a strategy role where you have to look forward and understand what's coming so you don't get disrupted, you know this pattern, and you know it well. And as you go through it more and more often, it becomes second nature to you. I mean, many of the folks here in this room, we've heard over the past couple of days all these great ideas about how we're addressing the problems of today with a thought to the problems of tomorrow. What you may not know, though, is that this skill of rapid adaptation isn't one that everyone necessarily develops. This is why some of the practices we have, and some of the events and opportunities we attend, such as this one, are so critical: they give us a chance to understand what the rest of the world is going through. So AI, though it's the latest of the paradigm shifts, is just another paradigm shift.

So what is AI? This is the only slide where I'm gonna provide any kind of definition, and even here I'll avoid defining it in a very explicit manner. I'm going to offer that there are two main categories of AI. The first category is what I'll refer to as general-purpose AI. These are frameworks and systems that are incredibly powerful, but they give you a toolbox that you use to create a model to process your data. These systems, such as PyTorch, TensorFlow, and Caffe, are extremely powerful, and they can be used to create almost any kind of model to match almost any kind of data. The caveat is that you need something like a PhD in machine learning, maybe one in AI, or at a minimum a bachelor's in statistics, to make this work. There are folks who are really, really strong in this domain, but it requires a pretty deep skill set and a pretty deep investment to get useful impact out of these tools. I'm fortunate to know some of the folks who succeed in this realm, and I'll be honest and say this is not the domain where I live. Where I see a lot of AI adoption is in the realm of what I'll call specialized AI. Specialized AI refers to AI capabilities that have been commercialized as consumable, use-case-driven services.
So an example of this is computer vision. IBM calls it visual recognition; Microsoft, Google, Amazon, and others have their own versions of it. But it's a simple black-box API service where you send a picture and you get back a classification. It's that simple. For anyone in the room who's done computer vision by hand, you know it's not that simple, but these capabilities now exist where it's easier to get AI into an application than it is to pull tweets, because with an AI service like this, you just throw data at it and get a result back. What these specialized AI services have done is drastically lower the barrier to entry for AI in applications. A developer who's still midway through college can easily pick an AI service and throw it into an application they're writing. They may not even have to fully understand the AI implementation to do this. Now, this is a very, very powerful thing, but it also creates some risk, because the folks interacting with these services may not have a full appreciation of the use case they're addressing. And AI-driven applications really need to be tailored to the use case and the domain they're dealing with in order to be successful and drive business impact.

This is where we see new roles coming onto AI projects. One of these roles is very familiar to us. The other, though familiar, has not traditionally been a key part of many software projects. These are the data scientist and the domain expert. In the world of our general-purpose AI systems, we understand quite clearly that we need our data scientists. But they're now going to start taking a role in the enterprise where they're not only generating insights, they're generating models that drive behavior and business capability. They're generating a capability that becomes almost like a system that other applications in the enterprise have to interact with. At the same time, we have folks who are domain experts, line-of-business owners, et cetera, who are becoming part of the teams driving these projects. Because we can use any of these AI services to do anything we want, but for them to really be impactful, we need to get the line of business, the folks who know the process well, involved, so they can educate us on what this AI system should do.

This is tricky, because these new roles work very differently than traditional software development teams. In the world of Agile, in the world of Scrum, in the world of DevOps, we don't like meetings longer than 15 minutes, right? We have our stand-ups. We meet once a day in the morning, then you leave us alone so we can go code and get real work done; we'll meet again the next day, and I'll tell you all about the great work we did. With domain experts, by contrast, part of what they have to do is have long, lengthy conversations to establish exactly what data means in the context of their use case. You wouldn't believe the kinds of conversations you can be a part of with these folks, where they argue semantics for hours on end. But as the project nears completion, you realize that discussion was critical, because it defined a key part of how your AI system was able to interpret what it should or should not do based on the data and the environment around it. Now, the tricky part of all of this is that these new team members, though they have different ways of working, are still part of a larger team that has to deliver something.
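Going back for a second to how simple these black-box services are to consume: the sketch below shows roughly what a call to one looks like. To be clear, the endpoint URL, auth scheme, and response shape here are my own illustrative assumptions, not any particular vendor's actual API.

```python
import requests

# Hypothetical image-classification service. The URL, auth header,
# and response format are illustrative assumptions only.
API_URL = "https://api.example.com/v1/visual-recognition/classify"
API_KEY = "your-api-key"

def classify_image(path):
    """Send an image to a black-box vision service and return its labels."""
    with open(path, "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
        )
    response.raise_for_status()
    # Assumed response shape: {"classes": [{"class": "dog", "score": 0.97}, ...]}
    return response.json()["classes"]

for label in classify_image("photo.jpg"):
    print(label["class"], label["score"])
```

That's the entire integration: no model training, no GPUs, just an HTTP call. Which is exactly why a developer midway through college can drop AI into an app without fully understanding what's behind it.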
We still have to deliver functionality at the end of the day, whether it's a web app, a mobile app, or a microservice; there's something that has to be delivered. But now we have this additional set of dependencies and this additional workflow to account for. So we know we're gonna need to bring new skills into AI, but there are additional considerations one must take into account when working on an AI project.

The first is that many AI systems are non-deterministic, meaning you don't have a clear way of knowing what's gonna come out of them, and what's more, you don't have a direct way to manipulate that behavior. I liken many AI systems to pets, or to human beings. You can teach a pet, you can try to teach a child, but you can't force the behavior that you want. This particular dog belongs to my parents. She loves getting up on that chair and smiling when she does it. My parents have spent years trying to teach this dog not to do it. They've tried everything they can think of. I visit them all the time, and I try to get this dog off that couch. Now, this is a relatively recent picture: they got rid of the whole couch, but they kept the chair, because the dog still wants to get on it. Despite any effort, they still can't get this dog to behave the way they want it to, because all they can do is provide guidance and stimulus. They can modify training and take different approaches, but they cannot open up the dog's terminal and type in, "dog, get off couch." AI systems are similar: we can manipulate the training data and tweak configurations of the system, but we can't force the outcome we want.

Another aspect that has to be considered with AI systems is that the input data is unpredictable. Now, this isn't unique to AI systems, right? Anyone here who's ever written a system that had users knows you can't trust your users. But where AI systems have additional complexity is that they can handle a larger variance of data. When you write an AI system, when you train it, you train it for a particular domain. Again, I love the dog examples. This is my dog. She's a good dog. She's obedient most of the time, and she loves people. She knows that when we go on a walk, she's not allowed to pick up any sticks. This is because any time she picks up a stick on a walk, the walk stops and becomes a stick delivery mission, and any hope that our dog does something productive goes out the window, because she will proudly guard that stick. So we've trained our dog, and she now knows not to pick up sticks. If she sees a stick on the ground, she knows, "I'm not supposed to pick that up." But what happens if she sees something that is similar to a stick but fundamentally different? What if she sees a cinnamon stick? It's never happened, but it's a good example. We didn't train her not to pick up cinnamon sticks. It's similar to a stick in the sense that if she picks it up, the walk becomes unproductive. But none of the training we gave her introduced this kind of stimulus. In the same way, the AI systems you deploy, whether internally facing or externally facing, will always have an opportunity to come across data that is nothing like the training you've provided. Because of that fact, and the fact that AI can deal with loads of unstructured information, we have to be careful when we design training sets. We have to consider the variance that we can't foresee.
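One practical defense against input your training never anticipated is to refuse to act on low-confidence predictions. Here's a minimal sketch of the idea; the `predict` callable and its return shape are hypothetical stand-ins for whatever your model or AI service actually provides.

```python
CONFIDENCE_THRESHOLD = 0.80  # tune from validation data for your domain

def handle_input(predict, user_input):
    """Guard a non-deterministic model with a confidence gate.
    `predict` is a hypothetical callable returning (label, confidence)."""
    label, confidence = predict(user_input)
    if confidence < CONFIDENCE_THRESHOLD:
        # This input looks like nothing we trained on -- the cinnamon
        # stick, not the stick -- so fall back rather than guess.
        return {"action": "fallback",
                "message": "I'm not sure I understood. Let me find a human."}
    return {"action": label, "confidence": confidence}
```

A fallback path like this doesn't make the model deterministic, but it bounds the damage when the unforeseeable shows up.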
And we have to figure out ways to validate our underlying models to ensure they still work no matter what we find when we go outside. Now, in the world of DevOps, there's a metaphor that was popularized: cattle, not pets. The idea was used to convey the shift from applications running on servers where we deeply cared how each individual server was doing, to the movement toward microservices and cloud-based applications. Most folks here are probably familiar with the idea that if you're running an application in the cloud and it fails, you just restart the application in the cloud. You've not lost anything; your database is standing up somewhere else, in another cloud or in the same cloud. That's the idea behind "cattle, not pets." Yet in the world of AI and AI models, this metaphor is called into question. Because as we build AI models, we invest hours of expertise into these systems. We sit down with some of the most valuable experts on our team, whether they're our data scientists or our subject matter experts, and we build this model and tune it to be successful for our domain.

But what happens when that domain changes? What happens when, as a result of our project, we learn something new that teaches us about new dimensions of our data that we need to consider? There's a decision that has to be made. Do we take this existing model that we've spent a lot of time on and extend it? Or do we create a separate model specialized for this new domain? This is a decision that has to be made whenever an expansion of scope comes into play, and it's critical, because as you grow an AI model, you increase the complexity of that underlying model. My dog, for all the things she's good at, does not know how to handle clothes. If I put a hat on her, or any kind of outfit (the hat's my favorite, though; I think it looks adorable), she freezes up. She's not unhappy, but she has no idea what to do, because that's not something she's good at. It's not something she's trained on. She knows not to pick up the sticks. I'm not sure about cinnamon sticks, because we've never come across one yet. But when you put clothes into the equation, all bets are off. Now, by comparison, my in-laws have a dog that was raised as a show dog, to be shown and run around the dog shows. That dog doesn't have an issue with clothes whatsoever, because it's a separate model. It's a separate instance that's been trained for a different domain. This is a huge consideration that I believe every enterprise is going to have to take into account: as you invest in your models and start to grow their scope, you have to determine where one model's domain ends and a new model's domain begins.

And finally, with AI, there's a new level of risk that we're aware of but haven't really fully grasped yet. Because what happens when you have bad AI? Well, maybe the bad AI misunderstands someone's pizza order, hearing a large when they really wanted a medium. OK, we can correct that. But what if you have an extremely complex, large AI model that's connected to every critical business process? And what if it mischaracterizes data and you have no way of figuring that out because of all the complexity you've hidden within the model? Because of this, bad AI is extremely risky, and this is part of why we see enterprises being careful and cautious in the way they adopt these systems. Now, I believe these problems of AI can be addressed.
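There's no formula for that extend-or-split decision, but you can at least make it with data rather than instinct. Below is a sketch of one way to frame the gate, under the assumption that you keep a labeled regression test set for the original domain; the `evaluate` helper and the model callables are my own illustration, not any specific framework's API.

```python
MAX_REGRESSION = 0.02  # accuracy drop we'll tolerate on the original domain

def evaluate(model, test_set):
    """Fraction of (input, expected_label) pairs the model gets right.
    `model` is any callable from an input to a label."""
    correct = sum(1 for text, expected in test_set if model(text) == expected)
    return correct / len(test_set)

def extend_or_split(original_model, extended_model, original_tests):
    """Gate the 'grow the model' decision: if extending for the new
    domain degrades the original domain too much, keep a separate
    model instead (the family dog and the show dog stay distinct)."""
    baseline = evaluate(original_model, original_tests)
    extended = evaluate(extended_model, original_tests)
    return "split" if baseline - extended > MAX_REGRESSION else "extend"
```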
There is no silver bullet to handle all of them; it's always going to be a case-by-case analysis. But one approach I have found helpful is to look to DevOps. In the world of DevOps, the world of modern software development methodology and modern software development culture, we've learned a lot of lessons recently about how to move quickly, about how to fail fast, but also how to fail forward. And about how to give ourselves the foundation of stability that ensures that no matter how fast we're moving, we still have a site that generally works most, if not all, of the time, and that we still maintain a high level of user experience. Many of these ideas can be adopted for AI projects to help address some of these risks while we're still figuring out how to fully operationalize artificial intelligence as a capability.

For example, with continuous testing and integration, we can create an approach where we consistently validate our AI model whenever it changes, and we can enable those changes to be pushed forward as soon as they're validated. We can look to automation to take that continuous testing workload out of the hands of our data scientists and domain experts. I mean, these folks, their value is in their expertise, not in the work they do to simply validate the model. So we can automate that and enable them to be more productive. But perhaps most importantly, we can look to collaborate across disciplines. We can find new ways of empowering our domain experts and our data scientists to be equal members of the team. We can include them earlier in the process, the same way security is now becoming part of DevOps projects. You might hear that described as DevSecOps or SecDevOps; I don't know which one is right. And don't ask me what we'd call that applied to AI practitioner projects. I can't even imagine what a good name for that would be.

Now I'd like to walk you through an example of a project I'm currently leading where we've built an AI system, and where I've taken some of these thoughts and put them into practice. For this particular example, we built a relatively simple system: an AI chatbot for career development, internal to my company. This system had a relatively narrow scope. Its purpose was to answer commonly asked questions about technical career development pathways at IBM, some of which involved certification requirements, including Open Group certification. Exciting stuff. We were fortunate to have a number of domain experts involved with us, folks who represented the company from around the world and who were already guiding people through these questions. These were the folks who lived the day-to-day experience of answering questions all the time. But what we noticed was that these domain experts, who came from around the world and were all experts on the same domain, didn't agree with each other. And as I worked with them and helped them reach consensus, I realized they were all right. Each domain expert had an experience reflective of the part of the company they represented, whether it was a functional part of the company, like our consultants, or a geographic part, such as North America versus Australia. All the perspectives that seemed to be in conflict were actually in agreement. But we had to get to the level where everyone could appreciate the common themes across these different experiences that seemed to contradict one another.
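As a sketch of what that continuous validation can look like in practice: replay a suite of known utterances against the model every time it changes, and block promotion if accuracy slips. The chatbot client and the test-file format here are assumptions for illustration; `send_message` would be wired to whatever API your AI service actually exposes.

```python
import json
import sys

def send_message(text):
    """Hypothetical client call to the deployed chatbot; should return
    the intent the model classified the utterance as."""
    raise NotImplementedError("wire this to your chatbot service's API")

def run_sanity_suite(path="sanity_tests.json", min_accuracy=0.95):
    # Each entry: {"say": "how do I get certified?", "expect": "certification"}
    with open(path) as f:
        tests = json.load(f)
    passed = sum(1 for t in tests if send_message(t["say"]) == t["expect"])
    accuracy = passed / len(tests)
    print(f"sanity suite: {passed}/{len(tests)} passed ({accuracy:.0%})")
    # A non-zero exit fails the CI job, which blocks promotion.
    sys.exit(0 if accuracy >= min_accuracy else 1)

if __name__ == "__main__":
    run_sanity_suite()
```

Run on every model change, a script like this means no human ever has to send hundreds of messages by hand, and the experts' time goes into the model rather than into testing it.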
To give you a sense of scale, this system had five data sets associated with it, all of which were changing rapidly. The first data set was a set of input classifiers. These were the words we used to describe how our system would understand a user's sentence, a user's input: whether the user was offering a greeting, or had a question about a career roadmap or a particular profession. For each of those classifiers, we had a set of training data. We also had a set of object classifiers to describe the subjects important to our domain, such as the specific lists of professions, business units, and countries, and we had a set of training data for each of those classifiers as well. And finally, we had a logic tree. Think of this as the execution code for our AI chatbot system. Now, all of these data sets move very quickly. Over time, a lot of changes have been requested, because new thoughts about how users will interact with the system come up every day. Guidance evolves almost weekly, and so the system's logic tree has to be updated to reflect the most up-to-date guidance.

So the first thing we did was treat all those data sets as a single piece of code. We treated them like releases, so that every time we had a new version of guidance, a new set of training examples, or a new set of classifiers, we could write it to a repository and treat it as if it were a new release of code. This made it very easy for us to promote code from one environment to another and to fall back on a backup in case we had any kind of catastrophic failure. We added testing on top of this, and we used automation to do it. We established a sanity test, a baseline of what users might say and what our system should interpret from it. And of course we used automation, so no one had the job of testing this system by hand, because sending hundreds of messages to an AI chatbot is not something anyone wants to do, and at the speed at which we're making changes, it's untenable to have a single human being doing any of that.

But the most important thing we did out of all of this was to empower our domain experts. When the project started, I was brought in because I have expertise in many of these AI systems, and I worked to build the initial foundation of the system: the set of object classifiers, the set of input classifiers, the training data. But over time, it became clear that it wasn't a tenable model for me to be the one making changes, because I didn't know this domain as well as our HR team did. I didn't know it as well as our career team did. And they outnumbered me dozens to one. So we worked to enable them to understand the tools provided with this system. This narrow, specialized AI is built for business users; it has a UI that is actually very user-friendly. But we worked deeply with this team to enable them to understand the underlying models and the basics of how it all works, so that when they had an idea, when they came to a consensus in a discussion about how users should be engaged, they were empowered to make the change. And now I'm no longer directly part of these model updates. In fact, all I hear about is new test examples. The only time they bring anything to me is when they have a new idea and want to validate that they've approached it properly within our system. This project has gone fairly well because the system is representative of the expertise: the experts are the ones shaping it.
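Treating those data sets like code is mostly a matter of serializing them and cutting tagged releases. Here's a minimal sketch of the idea, assuming git as the repository; the `export_workspace` callable and the five file names are hypothetical stand-ins for however your AI service exposes its classifiers, training data, and logic tree.

```python
import json
import subprocess
from datetime import date

DATASETS = ["input_classifiers", "input_training",
            "object_classifiers", "object_training", "logic_tree"]

def cut_release(export_workspace, repo_dir="model-repo"):
    """Snapshot all five data sets and tag them as a release, so we
    can promote the tag between environments or roll back to it."""
    workspace = export_workspace()  # hypothetical: full model definition as a dict
    for name in DATASETS:
        with open(f"{repo_dir}/{name}.json", "w") as f:
            # sort_keys gives stable, reviewable diffs between releases
            json.dump(workspace[name], f, indent=2, sort_keys=True)

    tag = f"release-{date.today().isoformat()}"
    subprocess.run(["git", "-C", repo_dir, "add", "-A"], check=True)
    subprocess.run(["git", "-C", repo_dir, "commit", "-m", tag], check=True)
    subprocess.run(["git", "-C", repo_dir, "tag", tag], check=True)
    return tag
```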
They're the ones who engage with users daily. They're the ones who answer emails every day, who answer chats, phone calls, and occasionally even text messages about this domain. And now they're empowered to run wild. And because we've used some of the lessons from DevOps, we have stability in this. We have testing. We have the ability to roll back on the off chance anything bad happens. And with the power of cloud, we can do that relatively quickly.

So this system is now running and chugging along, and we're getting data every day. But the plot thickens. This is one of many HR chatbots across IBM. I actually learned this during the project. Halfway through, people started coming out of the woodwork asking, "Hey, I'm in HR and I want to build a chatbot. Can you help me?" And I enabled these people. I sent them to the training, and they're off doing well. But now we have a whole party of HR chatbots. Every system does something really well, but outside of its domain, it doesn't do so great. So how do we handle that? How do we deal with all these different AI systems that are dealing with the same kind of data? They're all dealing with internal employees asking questions about careers, certification, and in some cases promotions. We know what each system does well. And perhaps, if we can detect when someone asks a question about another system's domain, we can pass that question forward to that system. But how do we communicate to that other system all the information that our system has received from the user? If we can establish some kind of shared context, we can create an environment where future systems are able to quickly take their place in this ecosystem of IBM AI systems.

So how do we get started? Well, there are two questions we have to consider. The fundamental one is: how do we integrate these chatbots? They're all in adjacent domains. They're all HR-centric. But how can we enable them to pass relevant information to one another when none of the systems knows what is relevant to the other systems? I see this as a semantic interoperability problem. I see it as such because all of these systems are dealing with a fundamentally similar set of data. Each system has the ability to recognize when something it knows has occurred. For example, the system we built understands certification targets, so it can recognize when an individual is interested in certification, recognize what level of certification they want, and align that to a career specialization, such as technical specialist or architect. But if I'm to pass that information on to a third-party system, we need to have some agreed-upon way to understand how that data is represented. An interoperability standard, such as The Open Group's own Open Data Element Framework (O-DEF), provides such a model for us. Standards such as O-DEF give us a way to clearly delineate what data is, so that if my system has gathered data from a user, such as their first name, their last name, their phone number, their internal employee ID, or their current band level, we have a classification system we can use to describe that data and save it in that context. So if I pass a question out of my domain into this greater community of IBM systems, those systems, as they get my query and start to process it, can look for already known and agreed-upon data in that context.
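To make the handoff concrete, here's a sketch of what a shared-context envelope passed between two chatbots might look like. The numeric identifiers below are made-up placeholders, not real O-DEF codes (those come from the standard itself), and the envelope format is my own assumption.

```python
# Each gathered fact carries both a numeric O-DEF-style identifier and
# a local-language name, so a receiving system can match on either.
handoff = {
    "utterance": "What do I need for the specialist certification?",
    "context": [
        {"odef_id": "12.34.1", "name_en": "Person.LastName",   "value": "Smith"},
        {"odef_id": "12.34.2", "name_en": "Person.FirstName",  "value": "Jane"},
        {"odef_id": "12.34.7", "name_en": "Person.Identifier", "value": "0042718"},
        {"odef_id": "45.10.3", "name_en": "Career.BandLevel",  "value": "8"},
    ],
}

# The receiving chatbot has no idea what conversation produced this,
# but it can scan the context for any identifiers it cares about:
known = {item["odef_id"]: item["value"] for item in handoff["context"]}
band_level = known.get("45.10.3")  # resolvable even if the labels were in another language
```

Because the match happens on the numeric identifier, the same lookup works whether the sending system labeled the field in English or any other language, which is the language independence I'll come back to in a moment.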
And it provides an opportunity to create integration by consistently describing the data we've gathered from our user. Now, this kind of approach can even scale beyond the simple chatbot system. Through my interactions with the rest of IBM's internal HR systems, data can be collected and stored in a profile: where I am as an employee, my satisfaction with my job, my interest in other companies. So when I engage with these systems, if they can recognize that as a factor, they can engage me in a more personalized way, to try to keep me from leaving the company, or to suggest that I involve myself in other company initiatives to help improve my morale and my identification with the company.

Now, an important part of O-DEF that really needs to be stressed when we talk about anything chat-related is its language independence. You'll notice in the three examples I provide, for person last name, first name, and identifier, you see a numeric identifier followed by an identifier in the local language, English. O-DEF provides both of these as appropriate ways to describe what data is. So if my system receives a question, it can translate that question into another language and send it to another system; but through these numeric representations of context, we can also send along relevant context independent of how the local language describes it. This provides a level of interoperability that doesn't exist among many AI systems, because many are bound to language limitations. To illustrate the point further, here are other examples of attributes I can represent using the Open Data Element Framework. Now, O-DEF as a system provides a rich set of objects and properties we can use to convey what data is at a very fundamental level. But it also contains things we call plugins and extensions that can help us describe even further types of data by leveraging external standards, such as the ones created by the UN around units of measure and around products and services. So this system has been designed to enable us to describe any kind of data we come across.

Without a doubt, AI has unique challenges. But it's clear that there are opportunities in AI. And I would offer that if we adopt software engineering practices, DevOps methodologies, and a culture of openness and inclusion, and if we keep an eye on semantic interoperability, we can make almost any AI implementation successful. Thank you.

Now, I do believe we have some time for questions, if I'm reading the clock correctly. Are there any questions? Is that Michael on the roaming mic? There it is.

So, Michael, with respect to using O-DEF for actually helping decide which domain adaptation to select, which is what I assume your premise was. So first off, can you answer that? I mean, I assume that O-DEF was being used, in your mind, to decide, or to help inform, which particular domain adaptation is most appropriate to answer, or to be engaged in, a particular sub-domain.

So that's one approach, and I'll be candid, that's not the approach I was thinking of. So thank you for that idea. What I was thinking of for O-DEF's application here is that my system can interact with a user and collect certain data about the user. It can establish what we call context: it can understand what my job level is, my current role, my certification interests.
And when I pass a question to an external system through O-DEF, we can pass that information along with it in a consistent manner, so that the third-party system, which has no awareness of what conversation has occurred, can reach in and check for any of those variables if they're meaningful to the kind of guidance it would provide. For instance, if I'm an architect and I'm interested in exploring the specialist certification, a specialist system could be aware that I'm an architect at a certain level of certification and could suggest to me that some of the skill sets and some of my package for architect might also be relevant experience for my technical specialist certification. So it's really more a way for third-party systems to have a shared set of variables they can reference, even if they don't know what conversation has occurred prior, because these third-party integrations in some of these AI systems can occur in an almost stateless fashion. Without a shared vocabulary to describe what's happened already, it's hard to create a consistent integration unless you do point-to-point with every single system.

So have you considered putting this together as an example?

It's one of the ideas, yes, sir.

I think you're gonna have to prove that out to make it real. I do think it's possible to actually use O-DEF to, basically, maybe delegate to subdomains, because your point about having different domain adaptations that are slightly different but along the same subject is a challenge. What you're really pointing out, in your HR example, is this idea that there's gonna be a proliferation of AI chatbot solutions which are, like a Venn diagram, slightly overlapping, but not completely domain-specific to any one of the implementations. So potentially somebody would have a hard time finding the right one if there were a catalog of 30, 40, or 50 of these things over a period of time. So getting to the right one is going to be a bit of a challenge. And I can see that happening, especially in medical or any other deep domain where there are lots of subdomains and tracks along the way.

Definitely. Thank you.

I think it's a good trial from one angle, one viewpoint, and this is a good start, okay? A good start. However, we should invite some of the volunteers who are already trying to make a standard for AI in the world; some famous people are already involved. So let's invite all the other folks who have the same concern, a concern I agree with 100%. You know, if we keep going the way we are today, we're each just making our own special new application: "I am the best. We have it better." That's very good for sales, for some vendors, but it's not good for the whole of society at all. So your target, that we should make a standard from this, I agree with 100%. However, we should invite many different industries, domain by domain; they have different ideas and different processes too. So maybe domain-specific, meaning industry-centric, would be a good idea. And talk to Elon Musk; I understand he has his own project, spending his personal money, to make a standard for AI, though I haven't talked to him yet on that subject. So the point is very simple: good idea, good direction, good processes, but we need more friends to come together and try to drive this through a collaborative approach, okay? Listen to the stakeholders' different viewpoints and have them help set it all up.
And then let them decide which way we should go, not just we or I. That's my small comment for you, presenter. Great, good start.

Thank you, Jack. And I wholeheartedly agree that as we examine work in this domain, especially within The Open Group, there's a necessity for us to consider who the other vendors and implementers in this space are, what they're seeing, and how we can make sure that any approach we come up with is representative of as many use cases as possible.